Hacktoberfest 2023


AI & ML interests

None defined yet.

Hacktoberfest23's activity

julien-c
posted an update 2 months ago
After some heated discussion 🔥, we clarify our intent re. storage limits on the Hub

TL;DR:
- public storage is free, and (barring blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)

docs: https://huggingface.co/docs/hub/storage-limits

We continuously optimize our infrastructure to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥

cc: @reach-vb @pierric @victor and the HF team
julien-c
posted an update 2 months ago
wow 😮

INTELLECT-1 is the first collaboratively trained 10-billion-parameter language model, trained from scratch on 1 trillion tokens of English text and code.

PrimeIntellect/INTELLECT-1-Instruct
abidlabs
posted an update 5 months ago
👋 Hi Gradio community,

I'm excited to share that Gradio 5 will launch in October with improvements across security, performance, SEO, design (see the screenshot for Gradio 4 vs. Gradio 5), and user experience, making Gradio a mature framework for web-based ML applications.

Gradio 5 is currently in beta, so if you'd like to try it out early, please refer to the instructions below:

---------- Installation -------------

Gradio 5 requires Python 3.10 or higher. If you are running Gradio locally, please ensure that you have Python 3.10+ installed, or download it here: https://www.python.org/downloads/

* Locally: If you are running Gradio locally, simply install the release candidate with pip install gradio --pre
* Spaces: If you would like to update an existing Gradio Space to use Gradio 5, simply update the sdk_version to 5.0.0b3 in the README.md file on Spaces.

In most cases, that's all you have to do to run Gradio 5.0. If you start your Gradio application, you should see your Gradio app running, with a fresh new UI.
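
To sanity-check the install, a minimal app such as the one below should come up with the new UI (this snippet is illustrative, not part of the release notes):

import gradio as gr

# A tiny app to confirm the Gradio 5 beta is installed and running.
def greet(name):
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()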

-----------------------------

For more information, please see: https://github.com/gradio-app/gradio/issues/9463
abidlabs
posted an update 9 months ago
๐—ฃ๐—ฟ๐—ผ๐˜๐—ผ๐˜๐˜†๐—ฝ๐—ถ๐—ป๐—ด holds an important place in machine learning. But it has traditionally been quite difficult to go from prototype code to production-ready APIs

We're working on making that a lot easier with Gradio and will unveil something new on June 6th: https://www.youtube.com/watch?v=44vi31hehw4&ab_channel=HuggingFace
julien-c
posted an update 9 months ago
Hey it was good meeting you yesterday @MaziyarPanahi 🔥

thanks @mishig for setting this up

Let's make the Hub as useful as possible for the community ❤️
abidlabs
posted an update 10 months ago
Open Models vs. Closed APIs for Software Engineers
-----------------------------------------------------------------------

If you're an ML researcher / scientist, you probably don't need much convincing to use open models instead of closed APIs -- open models give you reproducibility and let you deeply investigate the model's behavior.

But what if you are a software engineer building products on top of LLMs? I'd argue that open models are a much better option even if you are using them as APIs. For at least 3 reasons:

1) The most obvious reason is the reliability of your product. Relying on a closed API means that your product has a single point of failure. On the other hand, at least 7 different API providers already offer Llama 3 70B, and there are libraries that abstract over these providers, so a single request can be routed to different providers depending on availability / latency (see the sketch after this list).

2) Another benefit is a consistent path to eventually going local. If your product takes off, it will be more economical and lower latency to have a dedicated inference endpoint running in your VPC than to call external APIs. If you've started with an open-source model, you can always deploy the same model locally. You don't need to modify prompts or change any surrounding logic to get consistent behavior. Minimize your technical debt from the beginning.

3) Finally, open models give you much more flexibility. Even if you keep using APIs, you might want to trade off latency vs. cost, or use APIs that support batches of inputs, etc. Because different API providers have different infrastructure, you can use the API provider that makes the most sense for your product -- or even use multiple API providers for different users (free vs. paid) or different parts of your product (priority features vs. nice-to-haves).
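
A rough sketch of the failover idea from point 1, assuming the providers expose OpenAI-compatible chat endpoints (the URLs and model id below are placeholders, not real services):

import requests

# Hypothetical OpenAI-compatible endpoints serving the same open model.
PROVIDERS = [
    "https://provider-a.example.com/v1/chat/completions",
    "https://provider-b.example.com/v1/chat/completions",
]

def chat(prompt, timeout=10):
    """Try each provider in order and fall back to the next one on failure."""
    payload = {
        "model": "llama-3-70b-instruct",  # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
    }
    for url in PROVIDERS:
        try:
            resp = requests.post(url, json=payload, timeout=timeout)
            resp.raise_for_status()
            return resp.json()["choices"][0]["message"]["content"]
        except requests.RequestException:
            continue  # provider down or slow: try the next one
    raise RuntimeError("All providers failed")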
julien-c
posted an update 10 months ago
text-generation-inference (TGI) is now fully open-source again!

Along with text-embeddings-inference.

We just switched both of those repos' licenses back to Apache 2. 🔥
abidlabs
posted an update 10 months ago
Introducing the Gradio API Recorder 🪄

Every Gradio app now includes an API recorder that lets you reconstruct your interaction in a Gradio app as code using the Python or JS clients! Our goal is to make Gradio the easiest way to build ML APIs, not just UIs 🔥
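
For illustration, the recorded code looks roughly like the snippet below (the Space id and arguments are placeholders):

from gradio_client import Client

# Re-run a recorded interaction against a hosted Gradio app.
client = Client("user/my-space")  # placeholder Space id
result = client.predict("Hello!", api_name="/predict")
print(result)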

julien-c
posted an update 11 months ago
Very glad to welcome @josefprusa, pioneer of 3D printing and open source hardware, founder of https://www.prusa3d.com/, to the HF Hub 👋

AI applied to 3D printing could be big.
julien-c
posted an update 11 months ago
What if you could casually access your remote GPU in HF Spaces from the comfort of your local VSCode 🤯
abidlabs
posted an update about 1 year ago
Necessity is the mother of invention, and of Gradio components.

Sometimes we realize that we need a Gradio component to build a cool application and demo, so we just build it. For example, we just added a new gr.ParamViewer component because we needed it to display information about Python & JavaScript functions in our documentation.

Of course, our users should be able to do the same thing for their machine learning applications, so that's why Gradio lets you build custom components, and publish them to the world 🔥
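
As a rough sketch of that documentation use case, assuming gr.ParamViewer takes a dict of parameter metadata (the entries below are made up for illustration):

import gradio as gr

# Hypothetical parameter docs rendered with the ParamViewer component.
params = {
    "temperature": {
        "type": "float",
        "description": "Sampling temperature used during generation.",
        "default": "0.7",
    },
}

with gr.Blocks() as demo:
    gr.ParamViewer(value=params)

demo.launch()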
abidlabs
posted an update about 1 year ago
Lots of cool Gradio custom components, but this is the most generally useful one I've seen so far: insert a Modal into any Gradio app by using the modal component!

import gradio as gr
from gradio_modal import Modal

with gr.Blocks() as demo:
    gr.Markdown("### Main Page")
    gr.Textbox("lorem ipsum " * 1000, lines=10)

    # The modal is rendered on top of the main page when visible=True
    with Modal(visible=True) as modal:
        gr.Markdown("# License Agreement")

demo.launch()
abidlabs
posted an update about 1 year ago
Just out: new custom Gradio component specifically designed for code completion models 🔥
julien-c
posted an update about 1 year ago
📣 NEW on HF

the Dataset Viewer is now available on *private datasets* too

You need to be a PRO or an Enterprise Hub user. 🔥

Great work from our Datasets team 🥰: @lhoestq @severo @polinaeterna @asoria @albertvillanova and the whole team 🥰
abidlabs
posted an update about 1 year ago
The next version of Gradio will be significantly more efficient (as well as a bit faster) for anyone who uses Gradio's streaming features. Looking at you chatbot developers @oobabooga @pseudotensor :)

The major change that we're making is that when you stream data, Gradio used to send the entire payload at each token. This is generally the most robust way to ensure all the data is correctly transmitted. We've now switched to sending "diffs" --> so at each time step, we automatically compute the diff between the most recent updates and then only send the latest token (or whatever the diff may be). Coupled with the fact that we are now using SSE, which is a more robust communication protocol than WS (SSE will resend packets if there are any drops), we should have the best of both worlds: efficient *and* robust streaming.
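
A toy illustration of the diff idea (not Gradio's actual implementation): send only the newly appended suffix at each step instead of re-sending the whole accumulated text.

def suffix_diff(previous: str, current: str) -> str:
    """Return only the newly appended text, assuming current extends previous."""
    return current[len(previous):]

sent_so_far = ""
for snapshot in ["Hello", "Hello, wor", "Hello, world!"]:
    delta = suffix_diff(sent_so_far, snapshot)
    print("send:", repr(delta))  # only the new tokens go over the wire
    sent_so_far = snapshot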

Very cool stuff @aliabid94 ! PR: https://github.com/gradio-app/gradio/pull/7102
abidlabs
posted an update about 1 year ago
Gradio 4.16 introduces a new flow: you can hide/show Tabs or make them interactive/non-interactive.

Really nice for multi-step machine learning demos ⚡️
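
A minimal sketch of that flow, assuming gr.Tab accepts visible/interactive and can be updated from an event handler (the step names below are illustrative):

import gradio as gr

with gr.Blocks() as demo:
    with gr.Tab("Step 1"):
        done = gr.Button("Finish step 1")
    # Hidden until the first step completes.
    with gr.Tab("Step 2", visible=False) as step2:
        gr.Markdown("Now you can continue here.")

    # Reveal the second tab when the button is clicked.
    done.click(lambda: gr.Tab(visible=True), outputs=step2)

demo.launch()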
abidlabs
posted an update about 1 year ago
✨ Excited to release Gradio 4.16. New features include:

🐻‍❄️ Native support for Polars DataFrames
🖼️ Gallery component can be used as an input
⚡ Much faster streaming for low-latency chatbots
📄 Auto-generated docs for custom components

... and much more! This is a HUGE release, so check out everything else in our changelog: https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md
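
For example, a minimal sketch of the Polars support, assuming gr.Dataframe accepts a Polars DataFrame directly (the data below is made up):

import gradio as gr
import polars as pl

# A small Polars DataFrame displayed as-is in a Gradio app.
df = pl.DataFrame({"model": ["llama-3-8b", "llama-3-70b"], "params_b": [8, 70]})

with gr.Blocks() as demo:
    gr.Dataframe(value=df)

demo.launch()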
abidlabs
posted an update about 1 year ago
๐—›๐—ผ๐˜„ ๐˜„๐—ฒ ๐—บ๐—ฎ๐—ฑ๐—ฒ ๐—š๐—ฟ๐—ฎ๐—ฑ๐—ถ๐—ผ ๐—ณ๐—ฎ๐˜€๐˜๐—ฒ๐—ฟ ๐—ฏ๐˜†... ๐˜€๐—น๐—ผ๐˜„๐—ถ๐—ป๐—ด ๐—ถ๐˜ ๐—ฑ๐—ผ๐˜„๐—ป!

About a month ago, @oobabooga (who built the popular text generation webui) reported an interesting issue to the Gradio team. After upgrading to Gradio 4, @oobabooga noticed that chatbots that streamed very quickly had a lag before their text would show up in the Gradio app.

After some investigation, we determined that the Gradio frontend would receive the updates from the backend immediately, but the browser would lag before rendering the changes on the screen. The main difference between Gradio 3 and Gradio 4 was that we migrated the communication protocol between the backend and frontend from WebSockets (WS) to Server-Sent Events (SSE), but we couldn't figure out why this would affect the browser's ability to render the streaming updates it was receiving.

After diving deep into browser events, @aliabid94 and @pngwn made a realization: most browsers treat WS events (specifically the WebSocket.onmessage function) with a lower priority than SSE events (the EventSource.onmessage function), which allowed the browser to repaint the window between WS messages. With SSE, the streaming updates would stack up in the browser's event stack and be prioritized over any browser repaint. The browser would eventually clear the stack, but it would take some time to go through each update, which produced a lag.

We debated different options, but the solution that we implemented was to introduce throttling: we slowed down how frequently we would push updates to the browser event stack, to a maximum rate of 20/sec. Although this seemingly "slowed down" Gradio streaming, it actually allows browsers to process updates in real time and provides a much better experience to end users of Gradio apps.
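
A toy illustration of the throttling idea (not Gradio's actual code): cap pushes to the browser at 20 per second while accumulating whatever arrives in between.

import time

MAX_UPDATES_PER_SEC = 20
MIN_INTERVAL = 1.0 / MAX_UPDATES_PER_SEC

def throttled_stream(chunks, push):
    """Push streamed chunks to the UI at most 20 times per second."""
    last_push = 0.0
    pending = ""
    for chunk in chunks:
        pending += chunk
        now = time.monotonic()
        if now - last_push >= MIN_INTERVAL:
            push(pending)   # flush everything accumulated so far
            pending = ""
            last_push = now
    if pending:
        push(pending)       # flush whatever is left at the end

# Example: print stands in for the "push to browser" callback.
throttled_stream(["tok "] * 100, print)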

See the PR here: https://github.com/gradio-app/gradio/pull/7084

Kudos to @aliabid94 and @pngwn for the fix, and to @oobabooga and @pseudotensor for helping us test it out!