Build an Agent
1. Registration & Profile Editing
1.1 Register/Login
Open the platform link agents.dyna.ai to reach the login and registration page.
Two registration methods are currently supported: "email account registration" and "mobile phone number registration".
Enter your email address or phone number and click "Get Verification Code"; the platform will send you an email or text message. Enter the verification code, a password, and the password confirmation to complete registration.
After successful registration, you can log in to the platform with the registered email address or mobile phone number.
Tip: An invitation code is required for registration; it can be obtained from the pre-sales team.

1.2 Edit profile
After logging in, go to "Personal Center - Account Settings" in the bottom-left corner to edit your personal information.

2. Create a bot
Click "Create Bot" under the robot list on the left and click "Create Now" on the homepage to enter the robot configuration page.

On the bot configuration page, fill in the required information such as "Bot Name", "Welcome Message", and "Role Settings", then save. (You can test the conversation effect in the panel on the right.)
After saving, the bot is created.
3. Modify the bot configuration

3.1 Basic configuration
Basic information: profile photo, bot name, welcome message
Models and languages: dialogue model, language settings
Role settings: role setting, variables
Other: short-term and long-term memory configuration, online search capability, opening hot questions
Right-side dialog box: for testing the bot's responses
3.1.1 Dialogue model (LLM)
Click on the dialogue model to enter the model configuration pop-up window.
The platform currently supports BR_LLM as well as multiple open-source and closed-source large language models. Hover the mouse over the "?" to display each model's characteristics. You can switch between models and test the reply quality according to your actual needs.
"Function call": indicates whether the model supports function calling. Plugins configured for a model that does not support function calling will not take effect.
"Divergence": adjusts whether the model's replies are rigorous or divergent.
"Top-P": adjusts whether the model's replies are creative or focused. A higher value (close to 1) increases the diversity and creativity of the text, while a lower value (close to 0) makes the text more focused and coherent.
"Reply upper limit": controls the maximum length of the model's reply.

3.1.2 Language Settings
Knowledge Base language: related to the knowledge embedding (indexing) model. The default is Chinese; other languages such as English, French, Japanese, and Indonesian can be selected. (Note: this option can only be chosen when the bot is first created and cannot be changed after saving.)
Knowledge Embedding Model: the indexing model used when importing the Knowledge Base; it depends on the Knowledge Base language. It is recommended to select the "Recommendation Model" directly. (Note: this option can only be chosen when the bot is first created and cannot be changed after saving.)
3.1.3 Character setting
"Character setting" is a very important configuration information. By filling in the content here, we can use natural language to directly teach and guide the robot, help the large language model understand what you want it to do for you, what you don't want it to do, and what style to reply in.
Clearly list the robot's role, introduction, tasks and goals, workflow, skills, limitations, etc., which can improve the robot's response effect.
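For instance, a minimal role setting (an illustrative example, not a platform template) might read: "Role: a customer-service assistant for an online bookstore. Tasks: answer questions about orders, shipping, and returns. Constraints: do not discuss topics unrelated to the store; reply politely and concisely in the user's language."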

View example:
1. The examples show a variety of prompt templates, which the user can select directly;
2. Examples can be added, modified, and reordered by administrators in the backend.


Large Model Optimization: this feature refines the original prompt provided by the user.
For example, if you want to customize an AI agent that plays the role of a lawyer, you can enter "You are a lawyer" in the [Character Settings] field and click the "Large Model Optimization" button. The large model will then optimize your original prompt. You can modify, delete, regenerate, or directly use the optimized prompt, and the optimization stays true to your original intent.
Prompt optimization requires you to provide an original prompt, such as "You are a lawyer" or simply "Lawyer".
3.1.4 Variable
Through variable configuration, content in the "Role Settings" can be replaced with variables, enabling dynamic substitution.
The preset time variable {{cTime}} will take effect after being enabled.
Custom variables can be configured by filling in a variable key, a field name, and a default value. Variable values can be passed into the "Role Settings" through API calls. If a variable has a default value, the default value is used during debugging.
To use a variable, type the "{" symbol in the "Role Settings" field above; a drop-down box will appear listing the available variables, and clicking one inserts it. (Note: only variables shown in orange font have been inserted successfully.)
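As a rough illustration of passing variable values at request time, here is a minimal sketch; the endpoint, field names, and variable key are hypothetical assumptions, not the platform's documented API:

```python
import requests

# Hypothetical example: the URL and field names are assumptions, not the documented API.
payload = {
    "bot_id": "your-bot-id",
    "query": "What are your opening hours?",
    "variables": {                    # values substituted into the Role Settings placeholders
        "company_name": "Acme Bank",  # hypothetical custom variable key
    },
}
resp = requests.post("https://agents.dyna.ai/api/v1/chat", json=payload)  # hypothetical endpoint
print(resp.json())
```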

3.1.5 Short-term memory
When turned on, the conversation content provided by the user is automatically collected, summarized, and stored; it can then be retrieved and recalled in ordinary conversations or used in conversation flows.
The number of recent conversation rounds to include can be configured.
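Conceptually, short-term memory behaves like a rolling window over the most recent conversation rounds. A minimal sketch of the idea (not the platform's implementation):

```python
from collections import deque

# Conceptual sketch: keep only the N most recent conversation rounds.
RECENT_ROUNDS = 5                                  # configurable window size
short_term_memory = deque(maxlen=RECENT_ROUNDS)

def record_round(user_msg: str, bot_msg: str) -> None:
    short_term_memory.append({"user": user_msg, "bot": bot_msg})

def recall_recent() -> list:
    # Returned rounds can be supplied to the model as recent context.
    return list(short_term_memory)
```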
3.1.6 Long-term memory
You can set a switch in the bot configuration. When turned on, the key content of the user's statements is automatically extracted (using the large model's capabilities) from the conversation between the user and the bot and stored in an orderly manner to simulate the brain's long-term memory. In subsequent use, this stored content plays an important role both for information queries and for helping the bot respond in a way that better matches the user's needs.
You can configure "Long-term memory extraction", "Number of long-term memory recalls" and "Long-term memory synchronization".

User Profile & User Memory Point
User Profile
A high-level abstraction/summary of the user's behavior, interests, attributes, and so on over time;
Relatively static (updated periodically).
Basic information: demographic characteristics including name, age, gender, occupation, income, marital status, etc.;
Preferences: the user's interests and preferences, consumer attitudes, etc.;
Geographic location: city of residence, place of birth
...
Collected as a set of labels.
User Memory Point
User-specific behavior, preferences, or contextual information recorded by the bot during interaction with the user;
Usually short-term, specific, and closely related to the current interaction scenario;
Dynamically updated (changes with interaction).
Key events: important dates and events/experiences mentioned by the user;
Historical interaction records: content of previous dialogues between the user and the bot, operation behaviors, etc.;
Individualized needs: specific needs expressed by the user while using the bot's services.
...
Collected from dialogue interaction.
-- User Profile Custom Configuration: customize the user profile information that the agent should remember.
Default Configuration: the categories [Personal Attributes, Geographic Location, Lifestyle Habits, Psychological Characteristics] and their subordinate labels are provided by default. (Categories in the default configuration support batch deletion and renaming; individual user profile labels can be deleted and added.)
New category: users can add new user profile categories (up to 20 rows).

-- User Memory Point Collection Prompt Configuration
The prompt can be edited.
When this switch is turned on, the user can set a specific prompt to guide the bot in collecting and organizing the user's memory points.
Default prompt (sets the bot's role):
Role: You (the robot) are a memory point extraction tool.
Goals: Extract summaries from the input content, simulating how the brain records information.
Skills
Skill 1: Summary Extraction
Use the third person, with "user" as the default subject.
Keep summaries between 10 and 300 characters.
Only summarize the original content; do not fabricate information.
For user inquiries, preserve more of the consultation information.
Be objective, concise, and clear.
Skill 2: ...
Constraints
Do not fabricate information.
Use third-person descriptions.
User Memory Point Collection Rules:
(1) Immediate Collection: collection starts after one Q&A round.
(2) Delayed Collection: collection is initiated X minutes after the user stops replying (the upper limit is 30 minutes).
3.1.7 Online search capability
Online search capability allows the agent to automatically retrieve, filter, and integrate relevant information from the Internet, so as to provide users with more accurate and useful answers.
After it is turned on, the large model automatically decides, based on the user's input, whether an online search is needed. When it is, the relevant search engine is queried for a certain number of web pages, and the large model then summarizes the results and outputs them to the user.
To meet the personalized online search needs of different bots in different regions, the search interface parameters are opened for users to configure themselves, including "market area", "time span", "number of queried web pages", and "query language and geographic location".
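These parameters might be expressed roughly as follows; the field names in this sketch are assumptions for illustration, not the platform's documented interface:

```python
# Illustrative only: field names are assumptions, not the documented interface.
online_search_config = {
    "market": "en-US",        # "market area" used by the search engine
    "freshness": "Month",     # "time span" of the returned results
    "count": 5,               # "number of queried web pages"
    "language": "en",         # "query language"
    "location": "Singapore",  # "geographic location"
}
```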

3.1.8 Opening Hot Questions
Opening hot questions give users some guidance. For example, in customer-service consultation scenarios there are always high-frequency questions; presenting them as opening hot questions gives users direct options, allowing them to ask quickly and in a more standardized way, which reduces the ambiguity and errors caused by how users phrase their questions.
Opening hot questions are displayed below the welcome message at the beginning of the conversation, making it easy to ask questions quickly. A maximum of 5 questions are displayed. If more than 5 questions are configured, 5 questions will be displayed randomly.
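The display rule can be illustrated with a small conceptual sketch (not the platform's code):

```python
import random

# Conceptual sketch: show at most 5 opening hot questions,
# chosen at random when more than 5 are configured.
def pick_hot_questions(configured: list[str], limit: int = 5) -> list[str]:
    if len(configured) <= limit:
        return configured
    return random.sample(configured, limit)
```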


3.1.9 Smart Follow-up Questions
Smart question enabling is supported: you can turn on "Smart Follow-up Questions" in the bot configuration, and the bot will automatically generate 3 related questions to guide the user to think further and keep the interaction engaging.


3.2 Advanced configuration
User question smart optimization: when enabled, bad cases caused by unclear references in user questions during multi-round conversations are reduced.
Whether to enable Knowledge Base: after enabling, knowledge retrieval is performed in every conversation round and the recalled content is added to the prompt. When disabled, this step is skipped.
Enable Database: after successfully creating a database, the user can attach it in the bot settings; once enabled, the bot will query the database and perform data analysis based on the user's questions.
Whether to use hit Q&A replies: in scenarios where the reply must strictly follow the Knowledge Base QA content, you can enable this function. After enabling, the logic only takes effect when the knowledge-retrieval relevance exceeds the configured threshold (adjustable within [0.65, 1]) and the number of knowledge entries meeting that threshold falls within the configured count (adjustable within [1, 10]). Both values can be tuned according to the actual effect (see the sketch after this list).
Whether to enable plugins: after turning it on, the enabled plugins can be retrieved and invoked; when turned off, this step is skipped.
Whether to enable conversation flow: when turned on, the intent-based conversation flows configured for the current bot can be retrieved; when turned off, this step is skipped.
Whether to transfer to a human reply: when enabled and the knowledge relevance falls within a certain threshold range, the conversation can enter the transfer-to-human process. Three transfer modes are currently supported: 1) only reply with the transfer-to-human message; 2) reply with the transfer-to-human message, and the human agent replies to the user on the platform; 3) reply with the transfer-to-human message, and the human agent replies to the user through email/SMS links outside the platform.
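The gating condition for hit Q&A replies can be sketched as follows; the variable names are illustrative, not the platform's implementation:

```python
# Conceptual sketch of the "hit Q&A reply" gating logic; names are illustrative.
def should_use_qa_reply(retrieved, relevance_threshold=0.65, min_hits=1, max_hits=10):
    hits = [item for item in retrieved if item["relevance"] > relevance_threshold]
    return min_hits <= len(hits) <= max_hits

retrieved = [{"answer": "Business hours are 9:00-18:00.", "relevance": 0.82}]
if should_use_qa_reply(retrieved):
    print(retrieved[0]["answer"])   # reply strictly with the matched QA content
```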

3.3 Configure rules
Synonym Replacement: suitable for mapping colloquial expressions to professional terms. If the user's question contains a configured "original word", the platform automatically replaces it with the configured "replacement word" (this function takes effect after it is enabled; see the sketch after this list).
Hit keyword reply: suitable for scenarios where keywords or sensitive words should trigger a fixed reply. If the user's question contains a configured "keyword", the platform directly returns the configured "specified reply", and you can choose the matching method (this function takes effect after it is turned on).
Guardrail: users can set content risk-control interception requirements. When the large model produces output, the system checks it against these requirements; if the output does not meet them, it is intercepted and a new round of generation is started to correct the original wording, thereby keeping the output compliant.
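A minimal sketch of how the synonym-replacement and keyword-reply rules behave; the rule tables and the simple "contains" matching below are illustrative assumptions, not the platform's code:

```python
# Illustrative rule tables; the matching method here is a simple "contains" check.
synonyms = {"cell phone": "mobile terminal"}        # original word -> replacement word
keyword_replies = {"refund": "Please contact support to process your refund."}

def apply_synonyms(question: str) -> str:
    # Synonym Replacement: rewrite the user's question before further processing.
    for original, replacement in synonyms.items():
        question = question.replace(original, replacement)
    return question

def match_keyword_reply(question: str):
    # Hit keyword reply: return the configured reply when a keyword is present.
    for keyword, reply in keyword_replies.items():
        if keyword in question:
            return reply
    return None
```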

