Wiki | AI Tools (fmhy.net)

GitHub - neuml/txtai: 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows

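The core workflow is building a vector index and querying it by meaning. A minimal sketch, assuming a recent txtai release; the model path, sample documents, and `content=True` setting are illustrative choices, not taken from the linked repo:

```python
# pip install txtai
from txtai import Embeddings

# Vector index backed by a sentence-transformers model (model choice is arbitrary);
# content=True also stores the raw text so search results include it.
embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2", content=True)

docs = [
    "txtai is an all-in-one open-source embeddings database",
    "Semantic search retrieves documents by meaning rather than keywords",
    "LLM orchestration chains models and prompts into workflows",
]
embeddings.index([(i, text, None) for i, text in enumerate(docs)])

# Query by meaning; each hit is a dict with id, text and similarity score.
for hit in embeddings.search("vector database for meaning-based search", limit=2):
    print(round(hit["score"], 3), hit["text"])
```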

GitHub - gyopak/sidellama

> tiny browser-augmented chat client for open-source language models.

GitHub - saoudrizwan/claude-dev

> Claude Dev goes beyond simple code completion by reading & writing files, creating projects, and executing terminal commands with your permission.

GitHub - YofarDev/yofardev_ai

> Yofardev AI is a small fun project that brings a Large Language Model (LLM) to life through an animated avatar. Users can interact with the AI assistant through text (or dictation), and the app responds with generated text-to-speech and lip-synced animations.

GitHub - severian42/Mycomind-Daemon-Ollama-Mixture-of-Memory-RAG-Agents

> Mycomind Daemon: A mycelium-inspired, advanced Mixture-of-Memory-RAG-Agents (MoMRA) cognitive assistant that combines multiple AI models with memory, RAG and Web Search for enhanced context retention and task management.

Msty - Using AI Models made Simple and Easy (msty.app)

Chat with files, understand images, and access various AI models offline. Use models from OpenAI, Claude, Perplexity, Ollama, and HuggingFace in a unified interface.

> Chat with any AI model in a single click. No prior model setup experience needed.

GitHub - fudan-generative-vision/hallo: Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation

Researchers have made significant strides in creating lifelike animated portraits that respond to spoken words. To achieve this, they've developed a novel approach that ensures facial movements, lip sync, and pose changes are meticulously coordinated and visually stunning. By ditching traditional methods that rely on intermediate facial representations, this innovative technique uses an end-to-end diffusion paradigm to generate precise and realistic animations. The proposed system integrates multiple AI components, including generative models, denoisers, and temporal alignment techniques, allowing for adaptive control over expression and pose diversity. This means that the animated portraits can be tailored to individual identities, making them more relatable and engaging. The results show significant improvements in image and video quality, lip synchronization, and motion diversity. This breakthrough has exciting implications for AI companionship, enabling the creation of more realistic and personalized digital companions that can interact with humans in a more natural and empathetic way.

by Llama 3 70B

PowerInfer-2: Fast Large Language Model Inference on a Smartphone

> Today, we’re excited to introduce PowerInfer-2, our highly optimized inference framework designed specifically for smartphones. PowerInfer-2 supports up to Mixtral 47B MoE models, achieving an impressive speed of 11.68 tokens per second, which is up to 22 times faster than other state-of-the-art frameworks. Even with 7B models, by placing just 50% of the FFN weights on the phones, PowerInfer-2 still maintains state-of-the-art speed!

NVIDIA ACE (developer.nvidia.com)

Build and deploy game characters and interactive avatars at scale.

> NVIDIA ACE is a suite of technologies for bringing digital humans, AI non-player characters (NPCs), and interactive avatars to life with generative AI.

GitHub - mustafaaljadery/llama3v: A SOTA vision model built on top of llama3 8B.

The Varying Levels of Getting Started with “Uncensored” LLM-Powered Chatbots (2024 Update)

Llama3 70B Successfully Deployed on a Single 4GB GPU

The open-source language model Llama3 has been released, and it has been confirmed that it can be run locally on a single GPU with only 4GB of VRAM using the AirLLM framework. Llama3's performance is comparable to GPT-4 and Claude3 Opus, and its success is attributed to its massive increase in training data and technical improvements in training methods. The model's architecture remains unchanged, but its training data has increased from 2T to 15T, with a focus on quality filtering and deduplication. The development of Llama3 highlights the importance of data quality and the role of open-source culture in AI development, and raises questions about the future of open-source models versus closed-source ones in the field of AI.

Summarized by Llama 3 70B Instruct

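The trick behind fitting a 70B model into 4GB of VRAM is loading transformer layers from disk one at a time instead of holding the whole model in memory. The sketch below follows the usage pattern from AirLLM's README as a rough illustration; the class name, model id, and generation arguments are assumptions to verify against the current AirLLM documentation:

```python
# pip install airllm
from airllm import AutoModel

MAX_LENGTH = 128

# AirLLM streams transformer layers from disk layer by layer, so only one
# layer's weights need to fit in GPU memory at a time.
model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

input_tokens = model.tokenizer(
    ["What is the capital of France?"],
    return_tensors="pt",
    truncation=True,
    max_length=MAX_LENGTH,
)

generation_output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=20,
    use_cache=True,
    return_dict_in_generate=True,
)

print(model.tokenizer.decode(generation_output.sequences[0]))
```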

Cloudflare AI: Access different LLM models over an API

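This appears to refer to Cloudflare's Workers AI, which exposes hosted models behind a single REST endpoint. A minimal sketch of calling it from Python; the environment variable names are placeholders, and the model slug and response shape should be checked against the Workers AI docs:

```python
import os

import requests

# Placeholders: supply your own Cloudflare account id and an API token
# with Workers AI permissions.
ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

# One endpoint per hosted model; the slug below is one of several options.
url = (
    "https://api.cloudflare.com/client/v4/accounts/"
    f"{ACCOUNT_ID}/ai/run/@cf/meta/llama-3-8b-instruct"
)

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"messages": [{"role": "user", "content": "Explain RAG in one sentence."}]},
    timeout=60,
)
resp.raise_for_status()

# The generated text sits under result.response in the JSON envelope.
print(resp.json()["result"]["response"])
```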

Turn your computer into an AI computer - Jan (jan.ai)

Run LLMs like Mistral or Llama2 locally and offline on your computer, or connect to remote AI APIs like OpenAI’s GPT-4 or Groq.

GitHub - cohere-ai/cohere-toolkit: Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications.

GitHub - McGill-NLP/webllama: Llama-3 agents that can browse the web by following instructions and talking to you

Building RAG with Llama3 Locally

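A common local setup pairs Ollama-served Llama3 with a small embedding model for retrieval. The sketch below is an illustration of that pattern, not the linked guide: the `nomic-embed-text` model, the toy documents, and the prompt format are assumptions, and it presumes a running Ollama server with both models already pulled.

```python
# pip install ollama numpy   (assumes `ollama pull llama3` and `ollama pull nomic-embed-text`)
import numpy as np
import ollama

docs = [
    "Llama 3 was released by Meta in 2024 in 8B and 70B parameter sizes.",
    "Retrieval-augmented generation grounds model answers in retrieved documents.",
    "Ollama runs open language models locally behind a simple API.",
]

def embed(text):
    # Embedding model choice is an assumption; any Ollama embedding model works.
    return np.array(ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"])

doc_vectors = [embed(d) for d in docs]

def retrieve(query, k=2):
    # Cosine similarity between the query vector and every document vector.
    q = embed(query)
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vectors]
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "What sizes does Llama 3 come in?"
context = "\n".join(retrieve(question))

# Stuff the retrieved context into the prompt and let the local model answer.
answer = ollama.chat(
    model="llama3",
    messages=[{
        "role": "user",
        "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
    }],
)
print(answer["message"]["content"])
```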

Mergoo: Efficiently Merge, then Fine-tune (MoE, Mixture of Adapters)

GitHub - astramind-ai/Mixture-of-depths: Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models"