Machine Learning - Learning/Language Models
- “Large Language Models (in 2023)” (Talk by Hyung Won Chung, OpenAI, at Seoul National University)
YouTube Video
- www.zdnet.com Is AI lying to us? These researchers built an LLM lie detector of sorts to find out
When their output is false, large language models can be made to disclose the truth. Here's how.
- Comparing Llama-2 and GPT-3 LLMs for HPC kernels generation
Abstract
We evaluate the use of the open-source Llama-2 model for generating well-known, high-performance computing kernels (e.g., AXPY, GEMV, GEMM) on different parallel programming models and languages (e.g., C++: OpenMP, OpenMP Offload, OpenACC, CUDA, HIP; Fortran: OpenMP, OpenMP Offload, OpenACC; Python: numpy, Numba, pyCUDA, cuPy; and Julia: Threads, CUDA.jl, AMDGPU.jl). We built upon our previous work that is based on the OpenAI Codex, which is a descendant of GPT-3, to generate similar kernels with simple prompts via GitHub Copilot. Our goal is to compare the accuracy of Llama-2 and our original GPT-3 baseline by using a similar metric. Llama-2 has a simplified model that shows competitive or even superior accuracy. We also report on the differences between these foundational large language models as generative AI continues to redefine human-computer interactions. Overall, Copilot generates codes that are more reliable but less optimized, whereas codes generated by Llama-2 are less reliable but more optimized when correct.
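For context, the kernels the paper prompts for are short, well-known BLAS-style routines. A minimal sketch in Python/numpy of what a correct generation looks like (function names are illustrative, not taken from the paper):

```python
import numpy as np

def axpy(a, x, y):
    """AXPY (BLAS level 1): y <- a*x + y."""
    return a * x + y

def gemv(A, x):
    """GEMV (BLAS level 2): matrix-vector product."""
    return A @ x

def gemm(A, B):
    """GEMM (BLAS level 3): matrix-matrix product."""
    return A @ B

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(axpy(2.0, x, y))  # [ 6.  9. 12.]
```

The paper's point is that the same three-line kernels must be regenerated across many programming models (OpenMP, CUDA, Numba, Julia Threads, etc.), which is where the reliability differences between Copilot and Llama-2 show up.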
- The Secret Ingredient of ChatGPT Is Human Advice
Original (pay-walled): https://www.nytimes.com/2023/09/25/technology/chatgpt-rlhf-human-tutors.html
- openai.com DALL·E 3
DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.
- Efficient Fine-Tuning for Llama-v2-7b on a Single GPU
YouTube Video
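Single-GPU fine-tuning of a 7B model typically relies on a parameter-efficient method such as LoRA (the video's exact approach is not specified here; this is a generic sketch of the LoRA idea). The base weight W is frozen and a low-rank update is learned, giving an effective weight W + (alpha/r) * B @ A:

```python
import numpy as np

d, r, alpha = 8, 2, 4  # hidden size, LoRA rank, scaling factor (illustrative values)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable, rank r
B = np.zeros((d, r))                 # trainable, initialized to zero

# At initialization B @ A is zero, so the model starts out unchanged.
W_eff = W + (alpha / r) * (B @ A)
print(np.allclose(W_eff, W))  # True

# Only A and B are trained: 2*d*r parameters instead of d*d per matrix.
print(2 * d * r, "trainable vs", d * d, "frozen")
```

For a real 7B model the same trick, applied to the attention projections, cuts trainable parameters by orders of magnitude, which is what makes a single consumer GPU sufficient.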
- refact.ai Introducing Refact Code LLM: 1.6B State-of-the-Art LLM for Code that Reaches 32% HumanEval
Today we're introducing Refact LLM: a 1.6B code model with real-time code completion (including fill-in-the-middle (FIM) capability) and chat.
- Meta Is Developing a New, More Powerful AI System as Technology Race Escalates
Original (pay-walled): https://www.wsj.com/tech/ai/meta-is-developing-a-new-more-powerful-ai-system-as-technology-race-escalates-decf9451
- ai.meta.com Introducing Code Llama, a state-of-the-art large language model for coding
Code Llama, which is built on top of Llama 2, is free for research and commercial use.
- GPT-4 Can’t Reason
Corresponding arXiv preprint: https://arxiv.org/abs/2308.03762
- www.theinformation.com Meta’s Next AI Attack on OpenAI: Free Code-Generating Software
Meta Platforms is preparing to launch software to help developers automatically generate programming code, a challenge to proprietary software from OpenAI, Google and others, according to two people with direct knowledge of the product. Meta’s code-generating artificial intelligence model, ...
- futurism.com Google’s Search AI Is Absolutely Horrible at Geography
Google's AI-powered search doesn't understand geography. Or, apparently, the alphabet. And definitely not both at the same time.
- www.theregister.com ChatGPT gets code questions wrong 52% of the time
But its suggestions are so annoyingly plausible
- sites.research.google Med-PaLM
Med-PaLM is a large language model (LLM) from Google Research, adapted to the medical domain and designed to provide high-quality answers to medical questions.
Med-PaLM harnesses the power of Google’s large language models, which we have aligned to the medical domain and evaluated using medical exams, medical research, and consumer queries. Our first version of Med-PaLM, preprinted in late 2022 and published in Nature in July 2023, was the first AI system to surpass the pass mark on US Medical License Exam (USMLE) style questions. Med-PaLM also generates accurate, helpful long-form answers to consumer health questions, as judged by panels of physicians and users.
We introduced our latest model, Med-PaLM 2, at Google Health’s annual health event The Check Up, in March, 2023. Med-PaLM 2 achieves an accuracy of 86.5% on USMLE-style questions, a 19% leap over our own state of the art results from Med-PaLM. According to physicians, the model's long-form answers to consumer medical questions improved substantially. In the coming months, Med-PaLM 2 will also be made available to a select group of Google Cloud customers for limited testing, to explore use cases and share feedback, as we investigate safe, responsible, and meaningful ways to use this technology.
- huggingface.co NousResearch/Nous-Hermes-Llama2-13b · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
Nous-Hermes-Llama2-13b is currently the highest ranked 13B LLaMA finetune on the Open LLM Leaderboard.
Model Description
Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
This Hermes model uses the exact same dataset as Hermes on Llama-1. This ensures consistency between the old and new Hermes, for anyone who wants a model that stays as close as possible to the original, just more capable.
This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096-token sequence length on an 8x A100 80GB DGX machine.
Announcements
- https://twitter.com/NousResearch/status/1682458324804009987
- https://twitter.com/Teknium1/status/1682459395853279232
- InvokeAI 3.0 released
YouTube Video
cross-posted from: https://lemmy.world/post/1954892
> It's looking really good! Major features include controlnet, support for SDXL, and a whole bunch of other cool things.
>
> Download: https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0
- huggingface.co georgesung/llama2_7b_chat_uncensored · Hugging Face
- ai.meta.com Llama 2 - Meta AI
Llama 2 — The next generation of our open source large language model, available for free for research and commercial use.
- Microsoft LongNet: One BILLION Tokens LLM — David Shapiro ~ AI (06.07.2023)
YouTube Video
cross-posted from: https://lemmy.fmhy.ml/post/649641
> We could have AI models in a couple years that hold the entire internet in their context window.
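LongNet's headline claim rests on dilated attention: the sequence is split into segments, and within each segment a query attends only to every r-th key, so cost grows roughly linearly with sequence length instead of quadratically. A toy sketch of the index pattern (segment size and dilation values are illustrative, and the real model mixes several (segment, dilation) configurations):

```python
def dilated_attention_indices(seq_len, segment, dilation):
    """For each segment, the key positions a query in that segment may attend to."""
    patterns = []
    for start in range(0, seq_len, segment):
        end = min(start + segment, seq_len)
        patterns.append(list(range(start, end, dilation)))
    return patterns

# 16 tokens, segments of 8, attend to every 2nd position within a segment
pat = dilated_attention_indices(16, 8, 2)
print(pat)  # [[0, 2, 4, 6], [8, 10, 12, 14]]

# Each query sees segment/dilation keys instead of seq_len keys, so attention
# cost scales like O(seq_len * segment / dilation) rather than O(seq_len^2).
```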
- github.com GitHub - wgryc/phasellm: Large language model evaluation and workflow framework from Phase AI.
Large language model evaluation and workflow framework from Phase AI.
Docs: https://phasellm.com/docs/phasellm/eval.html
This project provides a unified framework to test generative language models on a large number of different evaluation tasks.
Features:
- 200+ tasks implemented. See the task-table for a complete list.
- Support for models loaded via transformers (including quantization via AutoGPTQ), GPT-NeoX, and Megatron-DeepSpeed, with a flexible tokenization-agnostic interface.
- Support for commercial APIs including OpenAI, goose.ai, and TextSynth.
- Support for evaluation on adapters (e.g. LoRA) supported in HuggingFace's PEFT library.
- Evaluating with publicly available prompts ensures reproducibility and comparability between papers.
- Task versioning to ensure reproducibility when tasks are updated.
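As an illustration of what such a framework automates (a generic sketch, not the project's actual API): run a model callable over a task's prompts and score the outputs, for example by exact match:

```python
def evaluate_exact_match(model, task):
    """Score a model callable on (prompt, reference) pairs by exact match."""
    correct = 0
    for prompt, reference in task:
        prediction = model(prompt).strip()
        correct += prediction == reference
    return correct / len(task)

# Toy 'model' and task, purely for demonstration
task = [("2+2=", "4"), ("capital of France?", "Paris"), ("3*3=", "9")]
model = lambda p: {"2+2=": "4", "capital of France?": "Paris"}.get(p, "?")
print(evaluate_exact_match(model, task))  # 2 of 3 correct -> 0.666...
```

A real framework layers on top of this loop: versioned task definitions, standardized prompts, batching, and adapters for the various model backends listed above.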
- huggingface.co NousResearch/Redmond-Hermes-Coder · Hugging Face
Model Description
Redmond-Hermes-Coder 15B is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
This model was trained with a WizardCoder base, which itself uses a StarCoder base model.
The model is truly great at code, but it does come with a tradeoff. While far better at code than the original Nous-Hermes built on Llama, it is worse than WizardCoder at pure code benchmarks, like HumanEval.
It comes in at 39% on HumanEval, with WizardCoder at 57%. This is a preliminary experiment, and we are exploring improvements now.
However, it does seem better at non-code than WizardCoder on a variety of things, including writing tasks.
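The HumanEval numbers above are pass@k scores. The standard unbiased estimator from the Codex paper computes, per problem, the probability that at least one of k samples drawn from n passes, given that c of the n samples are correct:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples per problem, 3 of which pass:
print(pass_at_k(10, 3, 1))  # 0.3  (pass@1 equals the fraction of correct samples)
print(pass_at_k(10, 3, 5))
```

The 39% and 57% figures quoted here are pass@1 averaged over the 164 HumanEval problems.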
Model Training
The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions.
Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' (v1) GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.
- OpenChat_8192 - The first model to beat 100% of ChatGPT-3.5
@Yampeleg: The first model to beat 100% of ChatGPT-3.5, available on Huggingface
🔥 OpenChat_8192
🔥 105.7% of ChatGPT (Vicuna GPT-4 Benchmark)
Less than a month ago, the world watched as Orca [1] became the first model ever to outpace ChatGPT on Vicuna's benchmark.
Today, the race to replicate these results open-source comes to an end.
Minutes ago OpenChat scored 105.7% of ChatGPT.
But wait! There is more!
Not only did OpenChat beat Vicuna's benchmark, it did so while pulling off a LIMA [2] move!
Training was done using 6K GPT-4 conversations out of the ~90K ShareGPT conversations.
The model comes in three versions: the basic OpenChat model, OpenChat-8192 and OpenCoderPlus (Code generation: 102.5% ChatGPT)
This is a significant achievement considering that it's the first (released) open-source model to surpass the Vicuna benchmark. 🎉🎉
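The "105.7% of ChatGPT" figure is a relative score on the Vicuna benchmark: a GPT-4 judge assigns each answer a quality score, and the model's total is divided by ChatGPT's total. A minimal sketch of that aggregation (the per-question scores below are made up for illustration):

```python
def relative_score(model_scores, reference_scores):
    """Vicuna-style relative score: total judged quality vs. the reference model."""
    return 100.0 * sum(model_scores) / sum(reference_scores)

# Hypothetical per-question scores from a GPT-4 judge (scale 1-10)
openchat = [9, 8, 9, 10, 8]
chatgpt  = [8, 9, 8, 9, 8]
print(f"{relative_score(openchat, chatgpt):.1f}% of ChatGPT")  # 104.8% of ChatGPT
```

Note that a score above 100% means the judge preferred the model's answers in aggregate, not that it wins on every question.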
- OpenChat: https://huggingface.co/openchat/openchat
- OpenChat_8192: https://huggingface.co/openchat/openchat_8192 (best chat)
- OpenCoderPlus: https://huggingface.co/openchat/opencoderplus (best coder)
- Dataset: https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset
- Code: https://github.com/imoneoi/openchat
Congratulations to the authors!!
---
[1] Orca: The first model to cross 100% of ChatGPT: https://arxiv.org/pdf/2306.02707.pdf
[2] LIMA: Less Is More for Alignment (TL;DR: using a small number of VERY high-quality samples, 1,000 in the paper, can be as powerful as much larger datasets): https://arxiv.org/pdf/2305.11206
- Model Catalog
https://docs.google.com/spreadsheets/d/1kT4or6b0Fedd-W_jMwYpb63e1ZR3aePczz3zlbJW-Y4/edit?usp=sharing