Apply for community grant: Personal project (gpu)

#1
by m1k3wn - opened

I have built my first ever fine-tuned models as a learning/creative project. They are fine-tuned from Flan-T5-base to interpret users' dreams and give reasonably believable responses: one mystical/Jungian model and one balanced model. For example:
input: "Interpret this dream: I had a dream I was coding all night and it almost worked."
generated_text: "The coding dream suggests that you're trying to build something new, but it's still not working. This dream encourages you to focus on the long-term goals of your life and try to find ways to keep things moving forward. It invites you to consider whether there's room for improvement in what you've been striving for. Mystically, this dream may be encouraging you to think about how much work needs to be done before you're ready to move forward. Psychologically, this dream could reflect feelings of self-awareness or creativity can help you feel more confident in your ability to make changes without having to go overboard"
selected_model: "nidra-v1"
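
For reference, here is a minimal sketch of how a response like the one above can be generated with the `transformers` text2text-generation pipeline; the repo id `m1k3wn/nidra-v1` is an assumption (only the model name `nidra-v1` appears above), as are the generation settings:

```python
# Minimal sketch: querying one of the fine-tuned Flan-T5-base dream interpreters.
# The model repo id below is assumed for illustration; adjust to the actual checkpoint.
from transformers import pipeline

interpreter = pipeline("text2text-generation", model="m1k3wn/nidra-v1")

prompt = "Interpret this dream: I had a dream I was coding all night and it almost worked."
result = interpreter(prompt, max_new_tokens=256)

print(result[0]["generated_text"])
```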
I have deployed them on an HF Space (thank you, this resource is amazing), but they are maxing out the CPU, so requests time out and responses get truncated. I'd really love to be able to host this properly on a GPU, even just for a short while, to test out what I've built. Really excited by this community and all the people doing wild stuff on here.
