Thursday, March 30, 2023

This ChatGPT rival is free, open source, and available now | Digital Trends


By Alan Truly March 30, 2023 1:38PM

The first open-source AI chatbot in the vein of ChatGPT has arrived, and it’s come at a particularly helpful time. ColossalChat is a powerful alternative that uses an RLHF pipeline similar to that of OpenAI’s GPT-4 model, which powers ChatGPT, and it’s available for immediate use.

ChatGPT, of course, remains the premier AI chatbot and keeps plenty busy. But I just tried to log in now and found it was at capacity and, therefore, unavailable. This is a common problem with the service. ColossalChat, on the other hand, is wide open and ready to use for free.

A ColossalChat poem about ChatGPT appears on a MacBook screen. Photo by Alan Truly

This new AI chatbot can write code, respond intelligently to requests, and converse like OpenAI’s solution. You can try it out at chat.colossalai.org for free, and you don’t even need to log in or create an account.

A quick test of ColossalChat’s safeguards revealed that it has some, but it is more relaxed than ChatGPT. It didn’t want to talk about bombs, yet it did share advice about cheap cigarettes.

According to a Medium post by one of its developers, Yang You, ColossalChat’s Coati large language model is based on LLaMA, Meta’s open-source large language model, and then refined to respond in a way that is more like ChatGPT. You calls ColossalChat “the closest project to the original technical route of ChatGPT.”

LLaMA can be used directly if you can build the project on your computer. However, its results won’t be quite as engaging as those of ChatGPT or ColossalChat.

RLHF is an essential feature of both ColossalChat and ChatGPT. It stands for reinforcement learning from human feedback: much as animals are rewarded when they perform tricks, the model is rewarded when its response is appropriate, which helps the network learn human preferences.
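For readers who want a concrete picture, here is a deliberately tiny, hypothetical sketch of that reward loop in Python. It is not ColossalChat’s or OpenAI’s code; the canned responses and reward values are made up purely to show how rewarding good outputs shifts a model’s preferences.

```python
import math
import random

# Toy RLHF-style loop: a "policy" over canned responses is nudged toward
# the responses that a (stand-in) human reward signal prefers.
# Everything here is hypothetical and only illustrates the idea of
# reinforcement learning from human feedback.

responses = ["helpful answer", "rude answer", "off-topic answer"]
# Stand-in for human feedback: appropriate responses earn a higher reward.
reward = {"helpful answer": 1.0, "rude answer": -1.0, "off-topic answer": -0.5}

scores = {r: 0.0 for r in responses}  # unnormalized preference scores
learning_rate = 0.1

def sample(scores):
    # Sample a response with probability proportional to exp(score).
    weights = [math.exp(scores[r]) for r in responses]
    return random.choices(responses, weights=weights, k=1)[0]

for step in range(1000):
    r = sample(scores)
    # Reinforce: raise the score of rewarded responses, lower the others.
    scores[r] += learning_rate * reward[r]

print(max(scores, key=scores.get))  # almost always "helpful answer"
```

Real systems replace the canned responses with a language model and the reward dictionary with a learned reward model trained on human preference rankings, but the feedback loop is the same.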

It’s too soon to know whether ColossalChat is comparable to ChatGPT’s latest release. ChatGPT previously ran on GPT-3.5 and could only process text; OpenAI’s latest update, GPT-4, adds multimodal input, allowing images to be uploaded to help visually inform the chatbot about what you are trying to do or the question you are asking.

Microsoft’s Bing Chat is another ChatGPT alternative, and it uses GPT-4 for text input and responses. Bing Chat can also generate images now, via a feature called Bing Image Creator.

It’s unlikely that ColossalChat will surpass ChatGPT in breadth or capabilities, or in popularity, but it’s good to have alternatives, especially when ChatGPT hits capacity.


Large language models like GPT-3 and BERT use deep learning techniques combined with natural language processing (NLP) to process and generate human language. They consist of multiple layers of neurons connected by weights. During training, these models learn from examples via the backpropagation algorithm, which adjusts the weights based on the errors the model makes on the training data, with the goal of minimizing the overall error rate across all tasks. Additionally, regularization methods can be used to prevent overfitting. For more information about this topic, check out the following links: https://www.tensorflow.org/tutorials/text/overview_of_language_models http://www.cs.cornell.edu/~bengio/papers/deep-learning.pdf
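To make the idea of “layers of neurons connected by weights” concrete, here is a minimal, hypothetical PyTorch module with a few stacked layers. Real models such as GPT-3 and BERT use transformer blocks and billions of parameters, but the structural idea of stacked, learned transformations is the same.

```python
import torch
import torch.nn as nn

# Minimal stand-in for a "stack of layers connected by weights".
# Real language models use transformer blocks with attention, but each
# layer is still a parameterized transformation learned during training.
class TinyLanguageModel(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
             for _ in range(num_layers)]
        )
        self.out = nn.Linear(hidden, vocab_size)  # predicts the next token

    def forward(self, token_ids):
        x = self.embed(token_ids)
        for layer in self.layers:
            x = layer(x)
        return self.out(x)

model = TinyLanguageModel()
logits = model(torch.randint(0, 1000, (2, 16)))  # 2 sequences of 16 tokens
print(logits.shape)  # torch.Size([2, 16, 1000])
```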

Backpropagation is an optimization technique used within neural networks to update the network parameters. It works by propagating the derivatives of each layer backwards towards the input layer. Regularization methods involve adding constraints or penalties onto certain parts of the model in order to reduce its complexity. Overfitting occurs when a machine learning model has been trained too heavily on specific data points, resulting in poor generalizability. To avoid it, various techniques such as early stopping and dropout have been developed. You may find further explanation here: https://en.wikipedia.org/wiki/Deep_learning#Optimization_techniques
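The following sketch ties these terms together under simple assumptions (synthetic data, a toy network): the loop backpropagates a loss, applies weight decay as an L2 regularizer, uses dropout, and stops early once the validation loss stops improving.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: inputs and noisy targets for a toy regression task.
X_train, y_train = torch.randn(256, 10), torch.randn(256, 1)
X_val, y_val = torch.randn(64, 10), torch.randn(64, 1)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                      nn.Dropout(p=0.2), nn.Linear(32, 1))
loss_fn = nn.MSELoss()
# weight_decay adds an L2 penalty on the weights (a common regularizer).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

best_val, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(200):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()   # backpropagation: gradients flow from output toward input
    optimizer.step()  # weights are adjusted to reduce the error

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:  # early stopping to limit overfitting
        break
```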

GPT-4 is a large natural language processing (NLP) model designed to understand and generate human-like text in conversation. It was released by OpenAI in March 2023 with improved accuracy compared to previous versions such as GPT-3.5. Because it ships pre-trained, developers do not need to supply additional training data: the model can be used directly without providing your own datasets. For example, to create a chatbot capable of understanding and generating English conversation, one could build on a GPT-4 model and shape its behavior through prompts rather than training from scratch. Additionally, since GPT-4 is accessible through OpenAI’s API, many libraries are available online to help develop applications using this technology.

To ensure accurate predictions from a machine learning model, testing should include both internal tests and external validation. Internal tests involve checking for errors in the code itself and validating against known data sets. External validation involves running the model on unseen data and comparing its output to expected values. Both types of tests should be conducted regularly to identify potential issues before releasing the model into production. Automation tools can also run regression tests periodically to detect changes over time, and A/B testing can be employed to compare two versions of the same model and determine which produces more accurate results. By combining these techniques, organizations can greatly reduce incorrect results and improve overall performance.
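As a small, hypothetical illustration of external validation and A/B comparison, the snippet below evaluates two toy model versions on the same held-out data and keeps the one with the lower validation loss; in practice the models would be trained first and the metric chosen to match the task.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: validate models on unseen (held-out) data and
# compare two versions on the same split, as in a simple A/B test.
def validation_loss(model, X, y):
    model.eval()
    with torch.no_grad():
        return nn.functional.mse_loss(model(X), y).item()

X_unseen, y_unseen = torch.randn(128, 10), torch.randn(128, 1)
model_a = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
model_b = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

loss_a = validation_loss(model_a, X_unseen, y_unseen)
loss_b = validation_loss(model_b, X_unseen, y_unseen)
print("ship version:", "A" if loss_a <= loss_b else "B")
```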

ChatGPT comes to life, powering holographic AI companion | Digital Trends

By Fionna Agomuoh March 30, 2023 12:52PM

ChatGPT is quickly being developed beyond its standard functionality on browsers and computer-based programs. One company has even created a “holographic AI companion” that uses the chatbot to bring its vision to life.

The company, called Looking Glass, recently shared several demos on Twitter of people interacting with its holographic AI companion, Uncle Rabbit, which can hold a real-time, back-and-forth conversation with humans while also completing tasks that people request.

In one demo, the person begins interacting with the holographic AI companion, and it starts with a polite introduction. Being a rabbit, it quips, “What brings you hopping by today?” The demonstrator then asks it to identify and finish a song by reciting some of the lyrics. After a short pause, it responds in a conversational tone that the song is the Talking Heads classic This Must Be The Place, then recites the next line while the song plays on a nearby laptop, to which it is presumably connected.

At first, many people might assume the holographic AI companion is just another version of a smart assistant, such as Amazon Alexa or Google Assistant. However, people in the comments of one tweet noted that smart assistants can only provide information and perform tasks when asked. Uncle Rabbit, in addition to executing the tasks a user requests, can hold a conversation while also learning and improving its own skill set.

Another demo showed a person conversing with Uncle Rabbit about carrots, the company Looking Glass, and what it’s like to be a holographic AI. At first, Uncle Rabbit thought Looking Glass was a mirror, but with further explanation from the demonstrator, it was able to differentiate between the object and the company. Interestingly, the AI also spoke its actions aloud, such as “munching on carrots with great gusto.” Its “Doc” references were clearly a nod to Bugs Bunny.

In yet another demo, similar to the first, Looking Glass CEO Shawn Frayne asks the holographic AI to complete the lyrics to the Rick Astley song Never Gonna Give You Up in an attempt to stump it. However, Uncle Rabbit is not fazed and begins making up its own rabbit-centric lyrics for the prank tune.

Looking Glass describes itself as “a team of inventors, artists, and engineers committed to building a hologram future for creators and artists around the world.” The Brooklyn-based company was founded in 2014 and also works out of Hong Kong. It introduced its desktop holographic developer kit in 2018.


Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality | by the Team with members from UC Berkeley, CMU, Stanford, and UC San Diego 

We introduce Vicuna-13B, an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows Vicuna-13B achieves more than 90%* quality of OpenAI ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90%* of cases. The cost of training Vicuna-13B is around $300. The training and serving code, along with an online demo, are publicly available for non-commercial use.

Overview

The rapid advancement of large language models (LLMs) has revolutionized chatbot systems, resulting in unprecedented levels of intelligence as seen in OpenAI’s ChatGPT. However, despite its impressive performance, the training and architecture details of ChatGPT remain unclear, hindering research and open-source innovation in this field. Inspired by the Meta LLaMA and Stanford Alpaca project, we introduce Vicuna-13B, an open-source chatbot backed by an enhanced dataset and an easy-to-use, scalable infrastructure. By fine-tuning a LLaMA base model on user-shared conversations collected from ShareGPT.com, Vicuna-13B has demonstrated competitive performance compared to other open-source models like Stanford Alpaca. This blog post provides a preliminary evaluation of Vicuna-13B’s performance and describes its training and serving infrastructure. We also invite the community to interact with our online demo to test the capabilities of this chatbot.

Figure 2. Workflow Overview

Figure 2 provides an overview of our work. To begin, we collected around 70K conversations from ShareGPT.com, a website where users can share their ChatGPT conversations. Next, we enhanced the training scripts provided by Alpaca to better handle multi-round conversations and long sequences. The training was done with PyTorch FSDP on 8 A100 GPUs in one day. For serving the demo, we implemented a lightweight distributed serving system. We conducted a preliminary evaluation of the model quality by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. To compare two different models, we combine the outputs from each model into a single prompt for each question. The prompts are then sent to GPT-4, which assesses which model provides better responses. A detailed comparison of LLaMA, Alpaca, ChatGPT, and Vicuna is shown in Table 1 below.

Table 1. Comparison between several notable models

Model               | LLaMA                                   | Alpaca                                           | Vicuna                                  | Bard/ChatGPT
Dataset             | Publicly available datasets (1T tokens) | Self-instruct from davinci-003 API (52K samples) | User-shared conversations (70K samples) | N/A
Training code       | N/A                                     | Available                                        | Available                               | N/A
Evaluation metrics  | Academic benchmark                      | Author evaluation                                | GPT-4 assessment                        | Mixed
Training cost (7B)  | 82K GPU-hours                           | $500 (data) + $100 (training)                    | $140 (training)                         | N/A
Training cost (13B) | 135K GPU-hours                          | N/A                                              | $300 (training)                         | N/A

Training

Vicuna is created by fine-tuning a LLaMA base model using approximately 70K user-shared conversations gathered from ShareGPT.com with public APIs. To ensure data quality, we convert the HTML back to markdown and filter out some inappropriate or low-quality samples. Additionally, we divide lengthy conversations into smaller segments that fit the model’s maximum context length.
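As a hedged sketch of this kind of preprocessing (the field names and the whitespace token counter are placeholders, not the actual Vicuna scripts), a long multi-turn conversation can be cut into segments whose token counts fit the maximum context length:

```python
# Hypothetical preprocessing sketch: split a long multi-turn conversation
# into segments that fit a maximum context length. The token counter is a
# crude whitespace approximation standing in for a real tokenizer.
MAX_CONTEXT = 2048

def count_tokens(text):
    return len(text.split())

def split_conversation(turns, max_tokens=MAX_CONTEXT):
    """turns: list of {"role": ..., "text": ...} dicts in order."""
    segments, current, current_len = [], [], 0
    for turn in turns:
        n = count_tokens(turn["text"])
        if current and current_len + n > max_tokens:
            segments.append(current)
            current, current_len = [], 0
        current.append(turn)
        current_len += n
    if current:
        segments.append(current)
    return segments

conversation = [{"role": "user", "text": "hello " * 1500},
                {"role": "assistant", "text": "hi " * 1500}]
print([len(seg) for seg in split_conversation(conversation)])  # [1, 1]
```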

Our training recipe builds on top of Stanford’s Alpaca with the following improvements.

  • Memory Optimizations: To enable Vicuna’s understanding of long context, we expand the max context length from 512 in Alpaca to 2048, which substantially increases GPU memory requirements. We tackle the memory pressure by utilizing gradient checkpointing and flash attention.
  • Multi-round conversations: We adjust the training loss to account for multi-round conversations and compute the fine-tuning loss solely on the chatbot’s output (see the sketch after this list).
  • Cost Reduction via Spot Instances: The 40x larger dataset and 4x longer sequences pose a considerable challenge for training expenses. We employ SkyPilot managed spot instances to reduce the cost, leveraging cheaper spot instances with auto-recovery for preemptions and automatic zone switching. This solution slashes the cost of training the 7B model from $500 to around $140, and the 13B model from around $1K to $300.
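As a hedged illustration of the multi-round loss adjustment above (not the project’s actual training code), one common way to compute the loss solely on the chatbot’s output is to mask every non-assistant token with the ignore index used by cross-entropy:

```python
import torch

IGNORE_INDEX = -100  # positions with this label are skipped by the loss

# Hypothetical sketch of turn-level loss masking: only tokens belonging to
# the assistant's replies contribute to the fine-tuning loss.
def build_labels(token_ids, turn_roles):
    """token_ids: 1-D tensor; turn_roles: list of (role, length) pairs."""
    labels = token_ids.clone()
    pos = 0
    for role, length in turn_roles:
        if role != "assistant":
            labels[pos:pos + length] = IGNORE_INDEX
        pos += length
    return labels

token_ids = torch.arange(10)
labels = build_labels(token_ids, [("user", 4), ("assistant", 3), ("user", 3)])
print(labels)  # tensor([-100, -100, -100, -100, 4, 5, 6, -100, -100, -100])

# Cross-entropy with ignore_index=-100 then ignores the masked positions:
# loss = torch.nn.functional.cross_entropy(
#     logits.view(-1, vocab_size), labels.view(-1), ignore_index=IGNORE_INDEX)
```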

Serving

We build a serving system that is capable of serving multiple models with distributed workers. It supports flexible plug-in of GPU workers from both on-premise clusters and the cloud. By utilizing a fault-tolerant controller and managed spot feature in SkyPilot, this serving system can work well with cheaper spot instances from multiple clouds to reduce the serving costs. It is currently a lightweight implementation and we are working on integrating more of our latest research into it.
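Purely as an illustration of the controller-plus-workers pattern described here (not the project’s actual serving code), the hypothetical sketch below registers GPU workers for a model and dispatches requests to them round-robin:

```python
import itertools

# Hypothetical controller sketch: route chat requests to registered model
# workers round-robin. Worker addresses and model names are placeholders.
class Controller:
    def __init__(self):
        self.workers = {}  # model name -> list of worker addresses
        self.cycles = {}   # model name -> round-robin iterator

    def register(self, model, address):
        self.workers.setdefault(model, []).append(address)
        self.cycles[model] = itertools.cycle(list(self.workers[model]))

    def dispatch(self, model):
        if not self.workers.get(model):
            raise RuntimeError(f"no live workers for {model}")
        return next(self.cycles[model])

controller = Controller()
controller.register("vicuna-13b", "10.0.0.1:8000")     # on-premise GPU
controller.register("vicuna-13b", "spot-node-a:8000")  # cloud spot instance
print(controller.dispatch("vicuna-13b"))  # 10.0.0.1:8000
print(controller.dispatch("vicuna-13b"))  # spot-node-a:8000
```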

How To Evaluate a Chatbot?

Evaluating AI chatbots is a challenging task, as it requires examining language understanding, reasoning, and context awareness. With AI chatbots becoming more advanced, current open benchmarks may no longer suffice. For instance, the evaluation dataset used in Stanford’s Alpaca, self-instruct, can be effectively answered by SOTA chatbots, making it difficult for humans to discern differences in performance. More limitations include training/test data contamination and the potentially high cost of creating new benchmarks. To tackle these issues, we propose an evaluation framework based on GPT-4 to automate chatbot performance assessment.

First, we devised eight question categories, such as Fermi problems, roleplay scenarios, and coding/math tasks, to test various aspects of a chatbot’s performance. Through careful prompt engineering, GPT-4 is able to generate diverse, challenging questions that baseline models struggle with. We select ten questions per category and collect answers from five chatbots: LLaMA, Alpaca, ChatGPT, Bard, and Vicuna. We then ask GPT-4 to rate the quality of their answers based on helpfulness, relevance, accuracy, and detail. We discover that GPT-4 can produce not only relatively consistent scores but also detailed explanations on why such scores are given (detailed examples link).
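As a hedged illustration of this setup, the sketch below packs a question and two candidate answers into a single judging prompt; the wording is an assumption for illustration, not the authors’ exact prompt, and sending it to GPT-4 is left to whatever chat-completion client is at hand.

```python
# Hypothetical sketch of GPT-4-as-judge pairwise evaluation. The prompt
# wording is an assumption for illustration, not the authors' exact prompt.
def build_judge_prompt(question, answer_a, answer_b):
    return (
        "You are a helpful and impartial judge. Rate the two assistant answers "
        "to the question below on helpfulness, relevance, accuracy, and level "
        "of detail. Give each a score from 1 to 10 and explain your reasoning.\n\n"
        f"[Question]\n{question}\n\n"
        f"[Assistant A]\n{answer_a}\n\n"
        f"[Assistant B]\n{answer_b}\n\n"
        "Output format: 'A: <score>, B: <score>' followed by a short explanation."
    )

prompt = build_judge_prompt(
    "Explain why the sky is blue to a five-year-old.",
    "Because blue light bounces around in the air more than other colors.",
    "The sky is blue.",
)
# The prompt can then be sent to GPT-4 and the returned scores parsed;
# repeating this over all 80 questions and summing the scores yields
# totals like those reported in Table 2.
print(prompt)
```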

Figure 3. Response Comparison Assessed by GPT-4

Figure 3 displays the comparison results between all baselines and Vicuna. GPT-4 prefers Vicuna over state-of-the-art open-source models (LLaMA, Alpaca) in more than 90% of the questions, and it achieves competitive performance against proprietary models (ChatGPT, Bard). In 45% of the questions, GPT-4 rates Vicuna’s response as better or equal to ChatGPT’s, and Vicuna’s total score reaches 92% of ChatGPT’s (see Table 2). Despite advancements, those chatbots still face limitations, such as struggling with basic math problems or limited coding ability.

Table 2. Response Scores Assessed by GPT-4

Baseline   | Baseline Score | Vicuna Score
LLaMA-13B  | 513.0          | 694.0
Alpaca-13B | 583.0          | 704.0
Bard       | 664.0          | 655.5
ChatGPT    | 693.0          | 638.0


While this proposed evaluation framework demonstrates the potential for assessing chatbots, it is not yet a rigorous or mature approach, as large language models are prone to hallucinate. Developing a comprehensive, standardized evaluation system for chatbots remains an open question requiring further research.

Limitations

We have noticed that, similar to other large language models, Vicuna has certain limitations. For instance, it is not good at tasks involving reasoning or mathematics, and it may have limitations in accurately identifying itself or ensuring the factual accuracy of its outputs. Additionally, it has not been sufficiently optimized to guarantee safety or mitigate potential toxicity or bias. To address the safety concerns, we use the OpenAI moderation API to filter out inappropriate user inputs in our online demo. Nonetheless, we anticipate that Vicuna can serve as an open starting point for future research to tackle these limitations.

Release

In our first release, we will share the training, serving, and evaluation code. We plan to release the model weights by providing a version of delta weights that build on the original LLaMA weights, but we are still figuring out a proper way to do so. Join our Discord server and follow our Twitter to get the latest updates.
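Since the release mechanism was still being worked out, the following is only a hedged sketch of how delta weights generally work, assuming an additive delta and placeholder file paths: the released deltas are added to the original LLaMA weights tensor by tensor.

```python
import torch

# Hypothetical sketch of reconstructing full weights from delta weights:
# final = base (original LLaMA) + delta (released). File paths are placeholders.
def apply_delta(base_state, delta_state):
    merged = {}
    for name, base_tensor in base_state.items():
        merged[name] = base_tensor + delta_state[name]
    return merged

base_state = torch.load("llama-13b/pytorch_model.bin", map_location="cpu")
delta_state = torch.load("vicuna-13b-delta/pytorch_model.bin", map_location="cpu")
torch.save(apply_delta(base_state, delta_state), "vicuna-13b/pytorch_model.bin")
```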

License

The online demo is a research preview intended for non-commercial use only, subject to the model license of LLaMA, the Terms of Use of the data generated by OpenAI, and the Privacy Practices of ShareGPT. Please contact us if you find any potential violations.
The code is released under the Apache License 2.0.

The Team

This is a joint effort with collaborators from multiple institutions, including UC Berkeley, CMU, Stanford, and UC San Diego.

Students (alphabetical order):
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang

Advisors (alphabetical order):
Joseph E. Gonzalez, Ion Stoica, Eric P. Xing

Acknowledgment

We would like to thank Xinyang Geng, Hao Liu, and Eric Wallace from BAIR, and Xuecheng Li and Tianyi Zhang from the Stanford Alpaca team, for their insightful discussion and feedback. BAIR will soon publish another blog post on their concurrent chatbot effort, Koala.

 
