AI & ML interests

None defined yet.

Recent Activity

blog-explorers's activity

davidberenstein1957 
posted an update 1 day ago
Reality123b 
posted an update 1 day ago
I have an issue with the Inference API.

Whatever model I choose and whatever input I give, it outputs "failed to fetch" on my laptop, PC, phone, and every other device. I've tried different accounts, etc., but I still get this error.

Please help, as almost all of my HF Spaces use the API.
KnutJaegersberg 
posted an update 3 days ago
A Brief Survey of Associations Between Meta-Learning and General AI

The paper "A Brief Survey of Associations Between Meta-Learning and General AI" explores how meta-learning techniques can contribute to the development of Artificial General Intelligence (AGI). Here are the key points:

1. General AI (AGI) and Meta-Learning:
- AGI aims to develop algorithms that can handle a wide variety of tasks, similar to human intelligence. Current AI systems excel at specific tasks but struggle with generalization to unseen tasks.
- Meta-learning or "learning to learn" improves model adaptation and generalization, allowing AI systems to tackle new tasks efficiently using prior experiences.

2. Neural Network Design in Meta-Learning:
- Techniques like Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks enable self-improvement and adaptability for deep models, supporting generalization across tasks.
- Highway networks and ResNet-style models use shortcut connections for efficient backpropagation, enabling the deeper models used in meta-learning frameworks (a minimal sketch follows this list).

3. Coevolution:
- Coevolution involves the mutual evolution of multiple components, such as learners or task-solvers, to improve overall performance.
- Coevolution between learners enhances collaboration and competition within AI systems, while coevolution between tasks and solvers (e.g., POWERPLAY and AI-GA frameworks) pushes solvers to adapt to increasingly complex tasks.

4. Curiosity in Meta-Learning:
- Curiosity-based exploration encourages AI systems to discover new, diverse features of the environment, avoiding local optima.
- Curiosity-based objectives can be combined with performance-based objectives to ensure efficient exploration and adaptation in complex tasks (a toy sketch appears after the paper link below).

5. Forgetting Mechanisms:
- Forgetting is crucial to avoid memory overload in AI systems.
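To make the shortcut idea from point 2 concrete, here is a minimal ResNet-style block in PyTorch (an illustrative sketch, not code from the survey): the identity path lets gradients bypass the learned transformation, which is what keeps very deep stacks trainable.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """ResNet-style block: output = x + f(x), so gradients can flow
    through the identity shortcut even if f's gradients vanish."""
    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.f(x)  # identity shortcut + learned residual

# Stacking many blocks stays trainable thanks to the shortcuts
model = nn.Sequential(*[ResidualBlock(64) for _ in range(20)])
out = model(torch.randn(8, 64))
```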

https://arxiv.org/abs/2101.04283
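And a toy version of the curiosity objective from point 4: extrinsic (performance) reward plus a count-based novelty bonus that decays as states are revisited. Names and constants here are invented for this sketch; the survey discusses many concrete formulations.

```python
from collections import Counter

visit_counts = Counter()

def total_reward(state, extrinsic: float, beta: float = 0.1) -> float:
    """Performance objective plus a curiosity bonus that shrinks as a
    state is revisited, pushing the agent toward unexplored regions."""
    visit_counts[state] += 1
    intrinsic = 1.0 / (visit_counts[state] ** 0.5)  # count-based novelty
    return extrinsic + beta * intrinsic

print(total_reward("s0", extrinsic=0.0))  # novel state: full bonus
print(total_reward("s0", extrinsic=0.0))  # revisited: smaller bonus
```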
Reality123b 
posted an update 3 days ago
It would be nice if HF gave users and orgs an option like the ones for models and datasets.
KnutJaegersberg 
posted an update 4 days ago
Artificial general intelligence through recursive data compression and grounded reasoning: a position paper

This paper proposes a system to achieve AGI through general data compression and grounded reasoning.

General Data Compression involves creating a flexible algorithm that adapts to the input data, simplifying and compressing it recursively while identifying simple, orthogonal features to avoid redundancy. AGI progress is measured by solving problems of increasing complexity, and the algorithm expands its search space according to the data itself. Compression is applied not only to data but also to model parameters, and sequences are segmented based on their compressibility (a rough illustration follows below).
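As a rough illustration of scoring compressibility: a standard compressor gives a computable stand-in for the uncomputable Kolmogorov complexity that the compression framing gestures at. The function below is invented for this example, not the paper's algorithm.

```python
import os
import zlib

def compressibility(data: bytes) -> float:
    """Compressed/raw size ratio: lower = more regular. A crude,
    computable proxy for how 'simple' a sequence is."""
    return len(zlib.compress(data, level=9)) / max(len(data), 1)

print(compressibility(b"ab" * 500))       # highly regular -> small ratio
print(compressibility(os.urandom(1000)))  # incompressible -> ratio near 1
```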

Grounded Reasoning refers to forming representations at various granularities, which is crucial for commonsense reasoning and AGI. The system treats the real world as its own model, switching between representations and maximizing resourcefulness. Key ideas include using the world as its own model for reasoning, and taking actions aimed at maximizing entropy to test hypotheses.

The paper emphasizes simplicity, data-dependent bias, recursion, orthogonality, resourcefulness, and grounding in real-world contexts as fundamental principles in building an AGI system.

https://arxiv.org/abs/1506.04366
Reality123b 
posted an update 4 days ago
Thank you, Hugging Face, for giving me this feature!
I'm really glad about it.
davidberenstein1957 
posted an update 5 days ago
Reality123b 
posted an update 6 days ago
davide221 
posted an update 6 days ago
I have just released Klarity, an open-source library that analyzes the entropy (both raw and semantic) of language model outputs.

The library uses a second model to generate JSON reports containing detailed analysis and insights, helping you better understand areas of uncertainty and decision-making in the main model. You're welcome to test the library on HF models or share feedback!

Repo: https://github.com/klara-research/klarity
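For context on what "raw" entropy can mean here: it can be computed directly from a model's next-token distributions. Below is a generic sketch using the transformers library (not Klarity's actual API; the model choice and variable names are just for illustration):

```python
# pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # any small causal LM works for this demo
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

# Shannon entropy of the next-token distribution at each position (nats)
probs = torch.softmax(logits, dim=-1)
entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
print(entropy[0])  # high values flag positions where the model is uncertain
```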
davidberenstein1957 
posted an update 6 days ago
victor 
posted an update 7 days ago
Hey everyone, we've given the https://hf.co/spaces page a fresh update!

Smart Search: Now just type what you want to do—like "make a viral meme" or "generate music"—and our search gets it.

New Categories: Check out the cool new filter bar with icons to help you pick a category fast.

Redesigned Space Cards: Reworked a bit to really show off the app descriptions, so you know what each Space does at a glance.

Random Prompt: Need ideas? Hit the dice button for a burst of inspiration.

We'd love to hear what you think, so please drop us some feedback!
davidberenstein1957 
posted an update 7 days ago
KnutJaegersberg 
posted an update 11 days ago
Anthropomorphic reasoning about neuromorphic AGI safety

This paper explores safety strategies for neuromorphic artificial general intelligence (AGI), defined as systems designed by reverse-engineering essential computations of the human brain. Key arguments and proposals include:

1. Anthropomorphic Reasoning Validity:
- Neuromorphic AGI’s design and assessment rely on human cognition models, making anthropomorphic reasoning (using human-like traits) critical for safety analysis. Comparisons to human behavior and neural mechanisms provide insights into AGI behavior and risks.

2. Countering Safety Criticisms:
- The authors challenge claims that neuromorphic AGI is inherently more dangerous than other AGI approaches. They argue all AGI systems face intractable verification challenges (e.g., real-world unpredictability, incomputable action validation). Neuromorphic AGI may even offer safety advantages by enabling comparisons to human cognitive processes.

3. Motivational Architecture:
- Basic drives (e.g., curiosity, social interaction) are essential for cognitive development and safety. These pre-conceptual, hardwired drives (analogous to human hunger or affiliation) shape learning and behavior. The orthogonality thesis (intelligence and goals as independent) is contested, as neuromorphic AGI’s drives likely intertwine with its cognitive architecture.

4. Safety Strategies:
- **Social Drives**: Embedding drives like caregiving, affiliation, and cooperation ensures AGI develops prosocial values through human interaction.
- **Bounded Reward Systems**: Human-like satiation mechanisms (e.g., diminishing rewards after fulfillment) prevent extreme behaviors (e.g., paperclip maximization); a toy sketch follows this list.
- **Developmental Environment**: Exposure to diverse, positive human interactions and moral examples fosters the development of prosocial values.
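A toy sketch of the bounded-reward idea from point 4 (the functional form and constants are invented for illustration; the paper argues for satiation qualitatively): total reward saturates, so the marginal value of "more of the same" goes to zero.

```python
import math

def satiating_reward(consumed: float, r_max: float = 1.0, k: float = 0.5) -> float:
    """Total reward saturates at r_max as consumption grows, so the
    marginal reward tends to zero -- unlike an unbounded maximizer,
    the agent has no incentive for extreme accumulation."""
    return r_max * (1.0 - math.exp(-k * consumed))

for units in (1, 5, 20, 100):
    print(units, round(satiating_reward(units), 4))  # gains flatten out
```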

https://ccnlab.org/papers/JilkHerdReadEtAl17.pdf
ameerazam08 
posted an update 12 days ago