Your feedback on HuggingChat
Any constructive feedback is welcome here. Just use the "New Discussion" button or this link!
^^ pin it? :)
HuggingChat can only speak poor Chinese. I told it, 'Let's speak in Chinese.' It said, 'Sure,' but then kept speaking in English or with incorrect pinyin. Still, this is an interesting project.
Vicuna is a great alternative to Open Assistant as it offers a more advanced and capable language model. Both are open-source solutions, allowing for customization and extension of their functionality. Vicuna's natural language processing capabilities are particularly impressive, making it a more intelligent virtual assistant overall.
Yes, I answered your post 👍
This needs more prompting than Google's Bard or ChatGPT; they understand quickly what I need. Also, the feeling that you are chatting with a machine is still there.
Sometimes there is no response. Most of the time it stops halfway or less through an answer. I am using it to program dotnet core.
we're having an outage on the API side, will report back when it's fixed! @SayakaMatsuoka @MadderHatterMax
looks back up for now!
😁It's working now, thank you. @nsarrazin
This is one of the weakest chatbots I've seen with DeepSeek R1: it makes spelling mistakes, is unclear in its remarks, and gets stuck in loops.
[fetch failed] occurred again🫨
@SayakaMatsuoka should be better now
I know, I've experienced it myself. I'll copy the chat logs later if I can.
By the way, it actually could be fixed if the HuggingChat team decided to use a Llama 3 model instead of the Qwen model. The original 670B version is preferred and recommended, but I think it would get costly on the HF server side pretty quickly, especially if they host it on AWS.