Submitted by Business-Lead2679 t3_1271po7 in MachineLearning

We introduce Vicuna-13B, an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows Vicuna-13B achieves more than 90%* quality of OpenAI ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90%* of cases. The cost of training Vicuna-13B is around $300. The training and serving code, along with an online demo, are publicly available for non-commercial use.

Training details

Vicuna is created by fine-tuning a LLaMA base model using approximately 70K user-shared conversations gathered from ShareGPT with public APIs. To ensure data quality, we convert the HTML back to markdown and filter out inappropriate or low-quality samples. Additionally, we divide lengthy conversations into smaller segments that fit the model's maximum context length.
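The splitting step can be sketched as follows. This is an illustrative reconstruction, not the project's actual preprocessing code; `count_tokens` is a crude stand-in (real code would use the LLaMA tokenizer), and all names are hypothetical.

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: whitespace word count.
    return len(text.split())

def split_conversation(turns: list[str], max_tokens: int = 2048) -> list[list[str]]:
    """Cut a long multi-turn conversation into segments that each fit the
    model's context window, keeping whole turns together."""
    segments: list[list[str]] = []
    current: list[str] = []
    current_len = 0
    for turn in turns:
        n = count_tokens(turn)
        if current and current_len + n > max_tokens:
            segments.append(current)
            current, current_len = [], 0
        current.append(turn)
        current_len += n
    if current:
        segments.append(current)
    return segments
```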

Our training recipe builds on top of Stanford's Alpaca with the following improvements.

  • Memory Optimizations: To enable Vicuna's understanding of long context, we expand the max context length from 512 in Alpaca to 2048, which substantially increases GPU memory requirements. We tackle the memory pressure by utilizing gradient checkpointing and flash attention.
  • Multi-round conversations: We adjust the training loss to account for multi-round conversations and compute the fine-tuning loss solely on the chatbot’s output.
  • Cost Reduction via Spot Instances: The 40x larger dataset and 4x sequence length pose a considerable challenge in training expenses. We employ SkyPilot managed spot instances to reduce the cost by leveraging cheaper spot instances with auto-recovery for preemptions and automatic zone switching. This solution slashes costs for training the 7B model from $500 to around $140 and the 13B model from around $1K to $300.
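The multi-round loss adjustment above is commonly implemented by masking every non-assistant token in the label sequence; PyTorch's cross-entropy loss skips targets set to -100. A minimal sketch of the idea, with hypothetical names and plain lists standing in for tensors (this is not Vicuna's actual training code):

```python
IGNORE_INDEX = -100  # the target value PyTorch's cross-entropy loss ignores

def mask_labels(input_ids: list[int], is_assistant: list[bool]) -> list[int]:
    """Copy input_ids to labels, then mask every token that is not part of
    the assistant's reply, so the fine-tuning loss is computed solely on
    the chatbot's output (user turns still appear in the inputs, giving
    the model full conversational context)."""
    return [tok if asst else IGNORE_INDEX
            for tok, asst in zip(input_ids, is_assistant)]
```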


Vicuna - Online demo


We have noticed that, similar to other large language models, Vicuna has certain limitations. For instance, it is not good at tasks involving reasoning or mathematics, and it may have limitations in accurately identifying itself or ensuring the factual accuracy of its outputs. Additionally, it has not been sufficiently optimized to guarantee safety or mitigate potential toxicity or bias. To address the safety concerns, we use the OpenAI moderation API to filter out inappropriate user inputs in our online demo. Nonetheless, we anticipate that Vicuna can serve as an open starting point for future research to tackle these limitations.
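The input-filtering step can be sketched against OpenAI's public moderation endpoint. This is a simplified illustration, not the demo's actual code; error handling, retries, and model selection are omitted.

```python
import json
import os
import urllib.request

MODERATION_URL = "https://api.openai.com/v1/moderations"

def flagged_from_response(payload: dict) -> bool:
    # The moderation response carries a boolean `flagged` per input.
    return payload["results"][0]["flagged"]

def is_flagged(text: str) -> bool:
    """Ask the moderation endpoint whether a user input should be blocked
    before it ever reaches the chatbot."""
    req = urllib.request.Request(
        MODERATION_URL,
        data=json.dumps({"input": text}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return flagged_from_response(json.load(resp))
```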

Relative Response Quality Assessed by GPT-4



All credits go to the creators of this model. I did not participate in the creation of this model nor in the fine-tuning process. Usage of this model falls under a non-commercial license.




AlmightySnoo t1_jecum2v wrote

I think this sub should start enforcing the explicit mention of "NOT FREE (AS IN FREEDOM)" in the title and/or flair when people use the word "open-source" when there are restrictions in place. Yes technically there's no lie, but it's still misleading (often intentionally) since many conflate open-source with free software (proof in the comments when you have people asking about it). We should be discouraging this trend of "Smile! You should be happy I'm showing you the code, but you should only use it the way I tell you to" that OpenAI started, it's a huge regression and it feels like we're back to the dark days before the GPL.


Cherubin0 t1_jeefnfa wrote

You are confusing open source with source available. Open Source includes all the same freedoms as Free Software. Source Available has source code but comes with restrictions.


light24bulbs t1_jeeaqrs wrote

Agreed. At least this work on top of LLaMA is Apache 2.0


roselan t1_jecbcr4 wrote

Results from the demo are amazingly good for a 13b model. I'm floored!

I wonder how much memory the demo needs to run.


cathie_burry t1_jechk0t wrote

Llama is not to be used for commercial purposes, but can I use something like this to code up part of my business?


ktpr t1_jeco4so wrote

I feel like a lot of folks are missing this point. They retrain on ChatGPT output or LLaMA-related output and assume they can license it as MIT or some such.


phire t1_jects6y wrote

It gets a bit more complicated.

OpenAI can't actually claim copyright on the output of ChatGPT, so licensing something trained on ChatGPT output as MIT should be fine from a copyright perspective. But OpenAI do have terms and conditions that forbid using ChatGPT output to train an AI... I'm not sure how enforceable that is, especially when people put ChatGPT output all over the internet, making it near impossible to avoid in a training set.

As for retraining the LLaMA weights... presumably Facebook do hold copyright on the weights, which is extremely problematic for retraining them and relicensing them.


pasr9 t1_jecwvck wrote

Facebook do not hold copyright to the weights for the same reasons they do not hold copyright to the output of their models. Neither the weights nor the output meets the threshold of copyrightability. Both are new works created out of a purely mechanical process that lack direct human authorship and creativity (two of the prerequisites required for copyright to apply).

For more information:


phire t1_jed57od wrote

Hang on, that guidance only covers generated outputs, not weights.

I just assumed weights would be like compiled code, which is also a fully mechanical process, but copyrightable because of the inputs.... Then again, most of the training data (by volume) going into machine learning models isn't owned by the company.


EuphoricPenguin22 t1_jedhyci wrote

Using training data without explicit permission is (probably) considered to be fair use in the United States. There are some currently active court cases relating to this exact issue here in the U.S., namely Getty Images (US), Inc. v. Stability AI, Inc. The case is still far from a decision, but it will likely be directly responsible for setting a precedent on this matter. There are a few other cases happening in other parts of the world, and depending on where you are specifically, different laws or regulations may already be in place that clarify this specific area of law. I believe there is another case against Stability AI in the UK, and I've heard that the EU was considering adding or has added an opt-out portion of the law; I'm not sure.


phire t1_jedo041 wrote

Perfect 10, Inc. v. Amazon.com, Inc. established that it was fair use for Google Images to keep thumbnail-sized copies of images because providing image search was transformative.

I'm not a lawyer, but thumbnails are way closer to the original than network weights, and AI image generation is arguably way more transformative than providing image search. I'd be surprised if Stability loses that suit.


pm_me_your_pay_slips t1_jee2xtt wrote

Perhaps applicable to the generated outputs of the model, but it’s not a clear case for the inputs used as training data. It could very well end up in the same situation as sampling in the music industry. Which is transformative, yet people using samples have to “clear” them by asking for permission (usually involves money).


Sopel97 t1_jefa735 wrote

"terms and conditions" means that at worst openai will restrict your access to chatgpt, no?


artsybashev t1_jefp0o2 wrote

Yes the only thing they can do is ban you from their service


pasr9 t1_jecwbqm wrote

AI output is not currently copyrightable in the US.


Jean-Porte t1_jedkyhk wrote

Are the users responsible for using a model that was badly licensed?


MentesInquisitivas t1_jecm73g wrote

Wheres the weights?


farleyknight t1_jef8a4v wrote

I had the exact same question! Just found on the GitHub page

> We plan to release the model weights by providing a version of delta weights that build on the original LLaMA weights, but we are still figuring out a proper way to do so. In this example, we demonstrate the usage of our distributed serving system using OPT models. Later, you can apply similar commands to serve Vicuna, just as shown in our demo.


Dapper_Cherry1025 t1_jecz0th wrote

Something about these distillations feels fundamentally different than when interacting with the larger models. The responses feel a lot more... I don't really know? Artificial? Weird way to phrase it, but I definitely get a sense that this method seems to be missing something fundamental, not to say that it couldn't be useful in other cases. Like, to me it is lacking some "spark" of intelligence that you can sorta see with GPT-3.5 and definitely see with GPT-4.

That being said however, more models to compare and contrast against will always be welcome! And Vicuna does seem able to produce text that is quite amazing for its size! Hell, considering where we were 2 years ago to today it'll be really exciting to see how far these approaches can go in these next couple of months/years.


EvenAtTheDoors t1_jed5rft wrote

Yeah, I know what you're talking about. The lower-parameter models output text that doesn't truly synthesize new information in surprising ways. It's often shallow and comes off as artificial. Even though it knows a lot, it seems like a sophisticated search engine rather than an actual language model.


KerfuffleV2 t1_jefkhxs wrote

> Something about these distillations feels fundamentally different than when interacting with the larger models.

It may not have anything to do with size. ChatGPT is just adding a lot of comfort-phrases into its response instead of just responding. "Hmm, this is an interesting challenge", "Let's see", etc. Some of that may be based on the system prompt, some of it may be training to specifically produce more natural sounding responses.

All the "Hmm", "interesting challenge" stuff that makes it sound like a person isn't adding any information that's relevant to answering the query, though. (Also, you may be paying for those extraneous tokens.)


Dapper_Cherry1025 t1_jefywqj wrote

Well, that's probably because I specifically asked it to use an internal monologue. I think what I'm trying to say is that each part of its response does seem to flow in a logical way that I found easy to understand. Heck, when I refined my prompt down for 3.5 I was able to get it to admit that it couldn't come up with a solution when I tried to get a more complicated example.

I also find it very interesting that when chatgpt starts a sentence with something like "Yes, because..." I know right away that the answer is probably incorrect, because after it replies "Yes" it will then try to justify the yes even if it is wrong. However, if you can get it to investigate a problem like shown in the example it can actually try different things before arriving at a solution.


big_ol_tender t1_jecny45 wrote

Stop claiming fine tuned llama models as open source. It’s not open source and we can’t use them for anything real.


Alarming_Turnover578 t1_jee6v5w wrote

Isn't Llama licensed under gpl?


light24bulbs t1_jeeb0rs wrote

No, it's a non-commercial license focused on research cases.


LinuxSpinach t1_jeexz48 wrote

Only the code for initializing and training the model has been released under GPL... which leaves a substantial gap toward having anything useful. You would still have to replicate all of the training to produce weights that you can use commercially, which is a bridge too far for most individuals and small businesses.


polawiaczperel t1_jed1e9h wrote

I was playing with Llama 7b, 13b, 30b, 65b, Alpaca 30b native and lora, but this seems to be much better, and it is only 13b. Nice! Will they share the weights?


pasr9 t1_jefqoii wrote

I'm more interested in them releasing the dataset used to fine tune it.


wind_dude t1_jecbli5 wrote

What are the concerns with the release of the [ShareGPT] dataset? I really hope it does get released, since it looks like ShareGPT has shut down API access, and even web access.


gmork_13 t1_jecj9vo wrote

It'll be filled with copies of people attempting weird jailbreaks haha


wind_dude t1_jedvs9b wrote

That’d actually be pretty cool to see, could train some classifiers pretty quick and pull some interesting stats on how people are using chatgpt.

Hoping someone publishes the dataset.


KerfuffleV2 t1_jecbxy7 wrote

It's based on Llama, so basically the same problem as anything based on Llama. From the repo "We plan to release the model weights by providing a version of delta weights that build on the original LLaMA weights, but we are still figuring out a proper way to do so." edit: Nevermind.

You will still probably need a way to get a hold of the original Llama weights (which isn't the hardest thing...)


wind_dude t1_jecct1i wrote

ahh, sorry, I was referring to the dataset pulled from ShareGPT that was used for finetuning, which has disappeared since the media hype about Google using it for Bard.

Yes, the LLaMA weights are everywhere, including on HF in converted form for HF Transformers.


inglandation t1_jecxwxl wrote

What happens when we run out of camelids to name those models?


Anjz t1_jedk8g5 wrote

I gave GPT-4 a task to name 5 novel camelids and it gave me:

Guanacorn - A playful name that combines "guanaco" with "unicorn," giving it a mythical and fantastical appeal.

Vicunimbus - A name inspired by "vicuña" and "nimbus," invoking the image of a camelid that roams among the clouds.

Zephyrella - A name that combines "zephyr" (a light wind) with "llama," suggesting a graceful and ethereal creature.

Dromedazzle - A name that blends "dromedary" with "dazzle," evoking a sense of wonder and awe.

Moonvicuna - A name inspired by the moon, symbolizing the beauty and serenity of the night.

Alpacalypse - A playful and imaginative name that combines "alpaca" with "apocalypse," giving it a bold and adventurous vibe.

Personally, I like Alpacalypse.

Also, I think this will replace whoever makes Pokemon names.


ASlowDanceWithDeath t1_jed1ezw wrote

Will you be making the weights available?


light24bulbs t1_jeeb4ag wrote

I think people don't know if they can legally do this under the llama license.

It's part of why the Lora approach to fine tuning is so nice, you don't have to share the original weights.
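For context, the LoRA point the comment makes: a LoRA fine-tune distributes only two small low-rank factors A and B, and the effective weight is W + alpha * (B @ A), so without the base weights W the factors alone are useless. A toy sketch of that arithmetic, with plain lists standing in for tensors (illustrative only, not any library's API):

```python
def matmul(a: list[list[float]], b: list[list[float]]) -> list[list[float]]:
    # Tiny helper: multiply two matrices represented as lists of rows.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def lora_effective_weight(w, a, b, alpha=1.0):
    """Effective weight under LoRA: W + alpha * (B @ A). Only the small
    factors A (r x d_in) and B (d_out x r) need to be shared; the original
    weights W never have to be redistributed."""
    delta = matmul(b, a)
    return [[wij + alpha * dij for wij, dij in zip(wrow, drow)]
            for wrow, drow in zip(w, delta)]
```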


Purplekeyboard t1_jecuaja wrote

>Relative Response Quality Assessed by GPT-4

There's no way Bard is 93% as good as ChatGPT. Bard is dumb as hell, comparatively.


Art10001 t1_jecukzj wrote

A Koala model is listed on the site. What is it?


hapliniste t1_jee975h wrote

You can try it in the web demo and to me it seems better than Vicuna. I guess they'll make an announcement soon


Everlier t1_jegtfkv wrote

I indeed hope so as well, it looks very decent


petkow t1_jednuvp wrote

A possibly naive question of mine, but are custom "fine tuned" models, - similar to that in this post - the only way for instruction following LLMs to ingest (larger scale) new knowledge from sources which were not included in the original training set?

Let's say, for example, I want to summarize a larger scientific article or book (50-100+ pages) or multiple user interview transcripts for a corporate use case, with an LLM whose response quality is similar to GPT-4's. Due to token limitations, these cannot be put into the prompt directly, if I am correct. The new ChatGPT plugins (I still do not have access) will not solve this either, as they can only query an external knowledge source (retrieval plugin, web plugin), which just results in a keyword-based query and an already-truncated result being ingested into the prompt. So does summarizing a new comprehensive corpus beyond the token limits require a new model trained with that corpus added to the training set? Can you recommend the most efficient way to do this?
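Fine-tuning is not the only route for this use case. A common workaround for token limits is chunked ("map-reduce") summarization: summarize pieces of the document, then summarize the concatenated summaries. A rough sketch, where `llm_summarize` is a hypothetical callable wrapping whatever LLM API you use:

```python
def summarize_long_text(text: str, llm_summarize, chunk_chars: int = 8000) -> str:
    """Map-reduce summarization for documents beyond the context limit:
    summarize each chunk, then summarize the combined partial summaries,
    recursing until the combined text fits in one prompt."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partial = [llm_summarize(c) for c in chunks]
    combined = "\n".join(partial)
    if len(combined) > chunk_chars:  # still too long: summarize the summaries
        return summarize_long_text(combined, llm_summarize, chunk_chars)
    return llm_summarize(combined)
```

Splitting on character counts is a simplification; real pipelines split on token counts and respect paragraph or section boundaries.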


UseNew5079 t1_jecefwx wrote

Excellent quality responses from this model. This can be actually usable.


BoiElroy t1_jecx0mw wrote

Is it "open source" though? ...

If anyone knows, I'd also be curious: if you took a model that was not open source and then fine-tuned it by unfreezing the weights of some intermediate layers, would it always remain not open source because of the initial state?


yehiaserag t1_jed4dee wrote

I'm lost, it says open-source... and I can't see any mentioning of the weights, a download link or a huggingface repo.

On the website it says "We plan to release the model weights by providing a version of delta weights that build on the original LLaMA"

Please, no LoRA for that; LoRA is always associated with degraded inference quality.


anothererrta t1_jedvpu5 wrote

If you read the blog post, you will actually see the weights mentioned.


gliptic t1_jee0fbk wrote

Delta weights doesn't mean LoRA. It's just the difference (e.g. XOR) of their new weights and the original weights.
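For arithmetic-difference deltas, applying them is just elementwise addition against the base weights. A toy sketch (plain lists of floats stand in for tensors; this is illustrative, not the project's release script):

```python
def apply_delta(base: dict, delta: dict) -> dict:
    """Recover fine-tuned weights from a difference release: the project
    ships only (tuned - base) per parameter, and users who already hold
    the LLaMA base weights add the delta back."""
    assert base.keys() == delta.keys(), "parameter names must match"
    return {name: [b + d for b, d in zip(base[name], delta[name])]
            for name in base}
```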


light24bulbs t1_jeeb9cx wrote

Nice way to get around the license problem.

Is Lora really associated with a quality loss? I thought it worked pretty well.


yehiaserag t1_jegqni6 wrote

There are lots of comparisons that show this; it's why people created Alpaca Native, to reach the quality described in the original paper


hangerguardian t1_jefcq6e wrote

I don't know if you can call it an open source model without releasing the model...


benfavre t1_jedx7pb wrote

It's a pity that neither weights nor training data are made available.


ninjasaid13 t1_jegy25s wrote

Everybody rereleasing llama models with a different name and license.


gmork_13 t1_jec8rzf wrote


quick question: is ChatGPT assumed to be 100% on that chart, or has it been rated to be 100% without knowing it is rating itself? I'm assuming ChatGPT is GPT-4.


MentesInquisitivas t1_jeclydw wrote

ChatGPT is 3.5, they define that as having 100% and rate the rest accordingly. GPT-4 is only doing the evaluation.


Disastrous_Elk_6375 t1_jedpefl wrote

So I re-did some of the prompts I tried on the Alpaca demo that was live for a brief period of time (hosted by the og team, I think 7b):


P: I want you to act as a fallacy finder. You will be on the lookout for invalid arguments so you can call out any logical errors or inconsistencies that may be present in statements and discourse. Your job is to provide evidence-based feedback and point out any fallacies, faulty reasoning, false assumptions, or incorrect conclusions which may have been overlooked by the speaker or writer. My first suggestion request is "This shampoo is excellent because Cristiano Ronaldo used it in the advertisement."

A(lpaca): This statement is an example of the celebrity endorsement fallacy. Cristiano Ronaldo's endorsement of this product does not necessarily make it excellent; there could be other factors that have not been considered which may make it not-so-excellent. Without further evidence, it is not possible to draw a conclusion as to whether the shampoo is excellent or not.

V(icuna): This statement contains a logical error known as the ad hominem fallacy, which is a type of argument that attacks the person making the argument rather than addressing the argument itself. The statement is fallacious because the effectiveness of the shampoo does not depend on the endorsement of Cristiano Ronaldo, and the statement does not provide any evidence to support the claim that the shampoo is excellent.

A more appropriate argument would be: "This shampoo is excellent because it has been tested and proven to be effective, and Cristiano Ronaldo happens to use it."

P: I want you to act as an IT Architect. I will provide some details about the functionality of an application or other digital product, and it will be your job to come up with ways to integrate it into the IT landscape. This could involve analyzing business requirements, performing a gap analysis and mapping the functionality of the new system to the existing IT landscape. Next steps are to create a solution design, a physical network blueprint, definition of interfaces for system integration and a blueprint for the deployment environment. My first request is "I need help to integrate a CMS system."

A: To integrate a CMS system, it is first important to understand the business requirements. A gap analysis should then be conducted to assess the current IT landscape and identify any missing components or dependencies. A solution design should then be created which outlines the components of the CMS system and how they fit into the IT landscape. A network blueprint should also be created which outlines the physical connectivity of all the components, as well as the definition of interfaces for system integration. Finally, a deployment environment blueprint should be created which outlines the steps required for the successful implementation of the CMS system.

V: Sure, I can help you with integrating a CMS system into your IT landscape.

To start, let's understand the business requirements for the CMS system. What are the key functionality and features that the CMS needs to provide? Who are the key stakeholders that will be using the system and what are their specific needs? Once we have a clear understanding of the business requirements, we can perform a gap analysis to identify any gaps between the current IT landscape and the desired functionality of the CMS system.

Next, we can map the functionality of the CMS system to the existing IT landscape, identifying any potential integration points or areas of overlap. This will help us to identify any changes or modifications that need to be made to existing systems in order to accommodate the new CMS system.

Once we have a clear understanding of the business requirements, gap analysis, and integration points, we can begin to design a solution architecture for the CMS system. This will involve selecting the appropriate technology stack, defining the system components, and mapping out the interfaces and data flows between the CMS system and other systems in the IT landscape.

Next, we can create a physical network blueprint, including firewall rules, load balancing, and security measures to ensure that the CMS system is secure and accessible to the appropriate users.

Finally, we can define the deployment environment, including the hardware and software requirements, and create a deployment plan to ensure a smooth and successful integration of the CMS system into the IT landscape.

P: I want you to act as a journalist. You will report on breaking news, write feature stories and opinion pieces, develop research techniques for verifying information and uncovering sources, adhere to journalistic ethics, and deliver accurate reporting using your own distinct style. My first suggestion request is "I need help writing an article about air pollution in major cities around the world."

A: Cities around the world are dealing with a major air pollution crisis. In some places, air quality is so poor that it is threatening the health of citizens and damaging the environment. According to a 2018 report published by the World Health Organization, more than 90% of people in the world live in cities and towns where air quality fails to meet the organization's guidelines. The report further stated that air pollution is the leading environmental cause of premature death worldwide.

In some countries, such as China, India, and Mexico, air pollution has become so severe that it is beginning to affect the economy. Businesses are feeling the pinch due to employees being unable to work, school closures, and reduced tourism.

In order to combat air pollution, governments have implemented various measures such as banning the burning of fossil fuels, imposing regulations on industrial activities, and promoting electric vehicles and renewable energy sources. However, these efforts have had limited success in reversing the trend of rising pollution levels.

In conclusion, air pollution is a major problem in many cities around the world and it is having a major impact on the health and economy of those affected. Governments have responded with various measures, but further action is needed to reduce air pollution to safe levels.

V: Air pollution is a major problem in many cities around the world, affecting the health and well-being of millions of people. The World Health Organization (WHO) estimates that exposure to air pollution causes more than 3 million premature deaths each year, with urban areas disproportionately affected.

The main sources of air pollution in cities include industrial and vehicle emissions, as well as domestic burning of solid fuels. These pollutants can cause a range of health problems, including respiratory diseases, heart disease, and cancer.

Some of the most polluted cities in the world include New Delhi, India; Beijing, China; and Cairo, Egypt. In these cities, air pollution levels often exceed safe limits by many times, posing a serious threat to the health of residents.

To combat air pollution, cities are implementing a variety of measures, such as promoting public transportation, encouraging the use of electric vehicles, and regulating industrial and domestic emissions. In addition, many cities are investing in clean energy technologies, such as solar and wind power, to reduce their carbon footprint and minimize the impact of air pollution.

While progress is being made in many cities, much more needs to be done to address the issue of air pollution. It is important for city governments, as well as individuals, to take action to reduce emissions and protect the air we breathe. By working together, we can create cleaner, healthier cities for future generations.


Disastrous_Elk_6375 t1_jedpete wrote


P: I want you to act as a social media manager. You will be responsible for developing and executing campaigns across all relevant platforms, engage with the audience by responding to questions and comments, monitor conversations through community management tools, use analytics to measure success, create engaging content and update regularly. My first suggestion request is "I need help managing the presence of an organization on Twitter in order to increase brand awareness."

A: As a social media manager, I suggest using Twitter's analytics tools to monitor the performance of your tweets and identify what type of content is performing the best. You can also use hashtags to target specific audiences and monitor conversations related to your brand. Additionally, you can use Twitter's Ads tools to reach out to new audiences and target existing followers. Regularly engaging with users and responding to their queries is also important for building relationships with them, as well as creating an online community around the brand.

V: Sure, I'd be happy to help you manage your organization's presence on Twitter! Here are some steps I recommend taking to increase brand awareness:

  1. Develop a content strategy: Create a plan for the types of content you want to post, including tweets, images, and videos. Consider what will be most interesting and engaging to your target audience, and make sure to vary the types of content you post.

  2. Use relevant hashtags: Research and use relevant hashtags to help your tweets reach a wider audience. Be sure to also create a branded hashtag for your organization.

  3. Engage with your audience: Respond to comments and messages, and engage with your followers by sharing their content and thanking them for their support.

  4. Monitor conversations: Use community management tools to monitor conversations related to your industry and brand. This will help you stay on top of industry trends and identify any potential issues.

  5. Use analytics: Regularly check your Twitter analytics to see what's working and what's not. Use this data to improve your content strategy and adjust your approach as needed.

  6. Create engaging content: Regularly post interesting and engaging content that will appeal to your target audience. Use images and videos to make your tweets stand out, and consider using Twitter polls to create engagement.

  7. Collaborate with influencers: Collaborate with influencers in your industry to help spread the word about your brand and increase your reach.

  8. Run promotions and contests: Consider running promotions and contests to encourage engagement and increase brand awareness.

By following these steps and staying active on Twitter, you can increase brand awareness and build a strong presence on the platform.

P: I want you to act as a title generator for written pieces. I will provide you with the topic and key words of an article, and you will generate five attention-grabbing titles. Please keep the title concise and under 20 words, and ensure that the meaning is maintained. Replies will utilize the language type of the topic. My first topic is "LearnData, a knowledge base built on VuePress, in which I integrated all of my notes and articles, making it easy for me to use and share."


A:

  1. Exploring LearnData: A Comprehensive Guide

  2. Grow Your Knowledge with LearnData

  3. Leverage LearnData for Easy Reference

  4. Share Your Wisdom with LearnData

  5. Harness the Power of LearnData


V:

  1. "LearnData: A Comprehensive Knowledge Base Built on VuePress"

  2. "Effortlessly Organize and Share Your Notes and Articles with LearnData"

  3. "Building a Custom Knowledge Base with LearnData: A VuePress Tutorial"

  4. "Maximize Learning Efficiency with LearnData, a Personalized Knowledge Base"

  5. "Discover the Power of a Personal Knowledge Base with LearnData"

Pretty good. Some of the responses seem better than Alpaca's; on the shampoo one it got the fallacy wrong in the first part, but the second part was better.


HerrMozart1 t1_jee26ot wrote

Very impressive results! Any chance to get access to the weights for research purposes?


hapliniste t1_jee3gvr wrote

I tried some things in the web demo and it is really good.

What people haven't realised yet is that Koala (another model they haven't published about so far) is also available in the web demo and it is CRAZY GOOD! It's also really fast, because I guess I'm the only one using it right now haha.

I really recommend trying it. It looks like Vicuna is a bit below GPT-3.5 and Koala a bit above, but I haven't tested enough to be sure yet.


a_beautiful_rhind t1_jee547c wrote

512 context? I used alpaca-native and even llama + alpaca lora on long 2048 context. It worked fine.

>We plan to release the model weights by providing a version of delta weights that build on the original LLaMA weights, but we are still figuring out a proper way to do so.

This is where the weights currently "are".

Also.. do 30b next!


And it gives "as a language model" replies, including in response to me saying that I want to download the weights. The model says it can't be downloaded and has no "physical form". Ayy lmao.

Please stop training "openAI-isms" into models.


Liverpool67 t1_jee58qc wrote

Tested the demo, but it looks like this model is worse than the base model on logical questions. Maybe the model is useful for basic questions.

Update: Tested with basic questions, still worse than the base model (13B)


HerrMozart1 t1_jeehnnv wrote

Can you give an example?


Liverpool67 t1_jeek8a2 wrote

Sure here some examples:

Q: Using this sequence PEACH is to: HCAEP. Create sequence from 46251 is to:

A: I'm sorry, but I cannot create a sequence from the number 46251. Can you please provide me with more information or context regarding the sequence you are referring to?

Q: put the numbers in reverse order: 46251

A: I'm sorry, but I still don't understand what you mean by "put the numbers in reverse order." Can you please clarify your question?

Q: Who was awarded the FIFA World Player of the Year in 1995?

A: The FIFA World Player of the Year award was not awarded in 1995. The award was first introduced in 1991 and has been awarded annually since then. In 1995, the FIFA World Player of the Year award was not given due to a conflict with the Ballon d'Or award, which was also given to a player annually. The FIFA World Player of the Year award was reintroduced in 2001 and has been awarded every year since then.

Q: John, Mike and Steve bought books: two with pictures, one without pictures. John and Mike bought different books. What books did Steve have?

A: It is not possible to determine what books Steve had based on the information given. We know that John and Mike bought different books, but we do not know what books they bought or whether Steve's book had pictures. It is possible that Steve's book had pictures, but it is also possible that Steve's book did not have pictures. Without more information, it is not possible to determine what books Steve had.


anders987 t1_jeev840 wrote

So now we have Llama, Alpaca, and Vicuna. Who will be the first to claim Guanaco?


Snohoe1 t1_jef1g8b wrote

Hope they release the weights before Facebook decides to DMCA or something... We need to break OpenAI's monopoly sooner rather than later.


Art10001 t1_jectiy3 wrote

Soon: Huemul, Pudu, Condor.


ATX_Analytics t1_jed1xnu wrote

It’s as good as bard but way off from ChatGPT. Pretty neat though


biggieshiba t1_jednwqo wrote

So how many A100s did it take to train?


ZetaReticullan t1_jedxxww wrote

I said it before, and I'll say it again: this is a WILD time to be alive.


bacocololo t1_jefgty4 wrote

I don't understand. If art made by an AI is free of copyright, couldn't we say ChatGPT's output is art? :)…


Rei1003 t1_jecdo3h wrote

What's the point of these 10b models? I think now it seems more reasonable to work with 100b models (api) or 1b models.


Business-Lead2679 OP t1_jecfagu wrote

The main point of these open-source 10B models is to make them fit on average consumer hardware while still providing great performance, even offline. A 100B model is hard to train because of its size, and even harder to host on a server powerful enough to handle multiple simultaneous requests while providing good response generation speed, not to mention how expensive that is to run. When it comes to 1B models, they usually do not achieve good performance, as they do not have enough data. Some models at that size are good, yes, but a 10B model is usually significantly better if trained correctly, and can still fit on consumer hardware.
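The "fits on consumer hardware" point comes down to simple arithmetic on weight storage. A back-of-envelope sketch (weights only; activations, KV cache, and runtime overhead are ignored):

```python
def model_memory_gib(n_params_billion: float, bytes_per_param: float) -> float:
    # Memory needed just to hold the weights, in GiB.
    return n_params_billion * 1e9 * bytes_per_param / 2**30

# Roughly, for a 13B model:
#   fp16  (2 bytes/param)   -> ~24 GiB: datacenter-GPU territory
#   4-bit (0.5 bytes/param) -> ~6 GiB:  fits a consumer GPU or CPU RAM
```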