July 22, 2024


Complete Australian News World

An open source chatbot that speaks English and Chinese


In a few simple words

ChatGLM is a chatbot that speaks English and Chinese. It is powered by an artificial intelligence model trained on large amounts of text in both languages. ChatGLM can answer questions, translate sentences, and chat with users. It is an open source project, meaning anyone can use and improve it.


ChatGLM (internal alpha test version: QAGLM), a chatbot designed specifically for Chinese users, has just been released. It is built on a Chinese-English language model with 100 billion parameters and offers question-and-answer and chat features.

This internal-test version is invitation-only and will be rolled out gradually. The researchers have also released ChatGLM-6B, a new open source Chinese-English bilingual dialogue model that, thanks to model quantization (INT4), can be run locally on consumer graphics cards.

It follows the open 100-billion-parameter GLM-130B base model. At INT4 quantization, only 6 GB of video RAM is required. Although ChatGLM-6B, with 6.2 billion parameters, is much smaller than 100-billion-parameter models, it greatly lowers the deployment barrier for users.
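The memory figures quoted here are consistent with simple back-of-the-envelope arithmetic, assuming weight storage dominates VRAM use (activations and runtime overhead add a few extra GB on top):

```python
# Approximate VRAM needed just to hold ChatGLM-6B's 6.2B parameters
# at different numeric precisions.
PARAMS = 6.2e9  # parameter count of ChatGLM-6B

def weight_gib(bits_per_param):
    """GiB of memory required for the weights alone at a given precision."""
    return PARAMS * bits_per_param / 8 / 1024**3

print(f"FP16: {weight_gib(16):.1f} GiB")  # ~11.5 GiB of weights -> ~13 GB in practice
print(f"INT8: {weight_gib(8):.1f} GiB")   # ~5.8 GiB  -> ~10 GB in practice
print(f"INT4: {weight_gib(4):.1f} GiB")   # ~2.9 GiB  -> ~6 GB in practice
```

The weights-only estimates line up with the 13 GB / 10 GB / 6 GB requirements reported for FP16, INT8, and INT4 deployment.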

After training on roughly 1T Chinese-English bilingual tokens, the model generates responses aligned with human preferences, aided by supervised fine-tuning, feedback bootstrapping, reinforcement learning from human feedback, and other techniques.


ChatGLM takes ChatGPT as its conceptual starting point. The 100-billion-parameter GLM-130B base model is largely responsible for the increased capabilities of the current version of ChatGLM.

Unlike BERT, GPT-3, or T5, GLM is an autoregressive pretraining framework with multiple training objectives. The researchers released GLM-130B, a dense 130-billion-parameter Chinese-English model, to the academic and business community in August 2022.


ChatGLM: Benefits and Key Features

  • ChatGLM processes text in different languages and has natural language generation and comprehension capabilities. It is trained on extensive material across many fields, which enables it to provide accurate and useful information to users.
  • It can infer relationships and logic between texts in response to user queries, and it can learn from its users and environment, automatically updating its models and algorithms. Many sectors benefit from this technology, including education, healthcare, and banking.
  • It helps people find answers and solve problems quickly and easily, and it raises awareness of and drives progress in the field of artificial intelligence.

Challenges and limitations

As a prototype machine without feelings or a conscience, ChatGLM lacks the capacity for empathy and moral reasoning that humans share. It can also be misled or reach wrong conclusions, because its knowledge depends on its training data and methods.

It may be uncertain when answering abstract or difficult questions, and it may need assistance to answer such questions accurately.

ChatGLM-130B and ChatGLM-6B: Revolutionary Natural Language Processing Models

Large-scale natural language processing models are increasingly used in fields ranging from education to healthcare to banking. In November 2022, Stanford University’s Big Model Center ranked the 30 most popular models worldwide, with the GLM-130B being the only model from Asia to make the list.

According to the evaluation report, GLM-130B is comparable to GPT-3 175B (davinci) across all 100-billion-scale base models on indicators of accuracy and harmlessness, robustness, and calibration error.

On the other hand, ChatGLM-6B is a Chinese-English language model with 6.2 billion parameters. Designed to facilitate question-and-answer sessions in Mandarin, ChatGLM-6B uses a method similar to ChatGPT. The researchers used supervised fine-tuning, feedback bootstrap, and reinforcement learning with human input to train the model on a combined 1T corpus of Chinese and English tokens.


The ChatGLM-6B model is an open source multilingual version of the General Language Model (GLM) framework with approximately 6.2 billion parameters. Quantization allows users to run it locally on low-end graphics hardware such as a 2080 Ti. The researchers have published the ChatGLM-6B model as open source to foster the development of large-model technology in the community.

Features that distinguish ChatGLM-6B

  • ChatGLM-6B is a 6.2 billion parameter bilingual language model trained on a mixture of Chinese and English content in a 1:1 ratio, giving it balanced capability in both languages.
  • Drawing on training experience from GLM-130B, the model uses an improved two-dimensional RoPE positional encoding with a conventional FFN architecture. ChatGLM-6B’s manageable parameter count allows independent fine-tuning and deployment by academics and individual developers.
  • ChatGLM-6B requires at least 13 GB of video RAM at FP16 precision. With quantization, this requirement can be reduced to 10 GB (INT8) or 6 GB (INT4), allowing deployment on consumer graphics cards.
  • With a sequence length of 2048, ChatGLM-6B is better suited to longer conversations and applications than GLM-10B (sequence length: 1024).
  • The model is trained to interpret human instruction intent using supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback. Its instruction-following output format is the result of this training.
  • These features make ChatGLM-6B a powerful and versatile language model capable of handling multilingual data accurately and efficiently. Its ability to run on consumer graphics cards, especially with quantization, provides a cost-effective option for academics and individual developers looking to use this technology in their projects.
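The RoPE positional encoding mentioned in the feature list works by rotating pairs of query/key components through position-dependent angles, so that attention scores depend only on relative position. A minimal one-dimensional sketch follows (ChatGLM-6B uses a two-dimensional variant; this function is illustrative, not the model's actual code):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotate the two halves of x pairwise by angles proportional to pos (RoPE)."""
    half = x.shape[0] // 2
    freqs = base ** (-np.arange(half) / half)   # per-pair rotation frequency
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:half], x[half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

q = np.random.default_rng(0).normal(size=8)
k = np.random.default_rng(1).normal(size=8)
# The query-key dot product depends only on the relative offset between positions:
s1 = rope(q, 3) @ rope(k, 5)    # offset 2
s2 = rope(q, 13) @ rope(k, 15)  # offset 2, shifted by 10
print(np.isclose(s1, s2))  # True
```

Because the rotation is norm-preserving and relative, the model can attend over long contexts (such as its 2048-token sequence length) without absolute-position embeddings.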