Atlas News
    
neptune.ai
26 December, 11:00
How to Build and Evaluate a RAG System Using LangChain, Ragas, and neptune.ai
Isaac Chung    Imagine asking a chat assistant about LLMOps only to receive outdated advice or irrelevant best practices. While LLMs are powerful, they rely solely on their pre-trained knowledge and lack the ability to fetch current data. This is where Retrieval-Augmented Generation (RAG) comes in. RAG combines...
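(The article builds the pipeline with LangChain and Ragas; as a rough taste of the idea, here is a minimal, framework-agnostic sketch of the retrieve-then-generate loop. The embedding model and the `call_llm` client are illustrative assumptions, not the article's code.)

```python
# Minimal sketch of the RAG retrieve-then-generate loop (not the
# article's LangChain code). `call_llm` is a hypothetical placeholder
# for whatever completion client you use.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "LLMOps covers deploying, monitoring, and maintaining LLM applications.",
    "RAG grounds model answers in documents retrieved at query time.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    return [docs[i] for i in np.argsort(doc_vecs @ q)[::-1][:k]]

def answer(query: str) -> str:
    """Stuff retrieved context into the prompt, then generate."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)  # placeholder: plug in your LLM client here
```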
19 December, 11:00
Position: Understanding LLMs Requires More Than Statistical Generalization [Paper Reflection]
Patrik Reizinger    In our paper, Understanding LLMs Requires More Than Statistical Generalization, we argue that current machine learning theory cannot explain the interesting emergent properties of Large Language Models, such as reasoning or in-context learning. From prior work (e.g., Liu et al.) and our...
12 December, 11:00
From Research to Production: Building The Most Scalable Experiment Tracker For Foundation Models
Aurimas Griciunas    Scaling large language model (LLM) operations is a challenge that many of us are facing right now. For those navigating similar waters, I recently shared some thoughts about it on the Data Exchange Podcast based on our journey at neptune.ai over the last few years. Six years ago, we were mainly...
05 December, 11:30
Transformers Key-Value Caching Explained
Michał Oleszak    The transformer architecture is arguably one of the most impactful innovations in modern deep learning. Proposed in the famous paper Attention Is All You Need, it has become the go-to approach for most language-related modeling, including all Large Language Models (LLMs), such as the GPT...
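(A toy illustration of the idea the article explains: during autoregressive decoding, each step computes keys and values only for the newest token and reuses the cached ones. A numpy sketch, not the article's code.)

```python
# Toy sketch of key-value caching in single-head attention: each decoding
# step appends one new key/value row to the cache and attends over all of it.
import numpy as np

d = 8                        # model/head dimension (toy size)
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
k_cache, v_cache = [], []    # grow by one row per generated token

def attend(x_new: np.ndarray) -> np.ndarray:
    """One decoding step for the newest token embedding x_new, shape (d,)."""
    q = x_new @ Wq
    k_cache.append(x_new @ Wk)       # cache this step's key ...
    v_cache.append(x_new @ Wv)       # ... and value; past rows are reused
    K, V = np.stack(k_cache), np.stack(v_cache)
    scores = K @ q / np.sqrt(d)      # attend over all cached positions
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()         # softmax
    return weights @ V

for _ in range(4):                   # simulate four decoding steps
    out = attend(np.random.randn(d))
```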
28 November, 11:30
Learn From Failure: Fine-Tuning LLMs With Trial-and-Error Data For Intuitionistic Propositional Logic Proving [Paper Reflection]
Chenyang An    With the rapid advancements in large language models (LLMs), transformer-based architectures are increasingly employed as tactical generators and premise selectors in automated theorem proving systems, generating candidate proof steps or selecting useful premises based on the unfinished proof goal....
21 November, 11:00
Fine-Tuning Llama 3 with LoRA: Step-by-Step Guide
Boris Martirosyan    Llama is a family of large language models (LLMs) developed by Meta. These models have demonstrated exceptional performance on benchmarks for language modeling, general question answering, code generation, and mathematical reasoning, surpassing recently introduced models such as Google’s Gemini...
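(For flavor, a minimal LoRA setup with Hugging Face PEFT; the rank, alpha, and target modules below are illustrative defaults, not necessarily the guide's settings, and loading the 8B checkpoint requires gated access and suitable hardware.)

```python
# Minimal LoRA attachment with PEFT (illustrative hyperparameters).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```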
14 November, 15:25
How to Run LLMs Locally
Gabriel Gonçalves    While LLM APIs offer quick access to powerful large language models, they’re not always the best option, whether due to cost, privacy concerns, or the need for customization. In many cases, running a model locally is more appealing, or even unavoidable, to meet requirements. However, this...
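(As a sketch of the simplest route, here is local inference with Hugging Face transformers; the model id is an example small checkpoint chosen to fit modest hardware, not the article's recommendation.)

```python
# Minimal local text generation with Hugging Face transformers.
# The model id below is an example; swap in any checkpoint you can run.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # small, fits modest hardware
    device_map="auto",                           # uses a GPU if one is available
)

print(generator("Briefly, what is LLMOps?", max_new_tokens=64)[0]["generated_text"])
```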
08 October, 08:10
Scale and Track Your AI/ML Workflows: neptune.ai, Flyte & Union Integration
Patrycja Jenkner    In the machine learning (ML) and artificial intelligence (AI) domain, managing, tracking, and visualizing model training processes is a significant challenge due to the scale and complexity of managed data, models, and resources. Union, an optimized and more performant version of the open-source...
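(The orchestration half of the integration lives on the Flyte/Union side; the Neptune half looks roughly like standard run logging. A sketch with placeholder project, parameters, and metric values.)

```python
# Sketch of logging a run to neptune.ai (project name, parameters, and
# loss values are placeholders; reads NEPTUNE_API_TOKEN from the environment).
import neptune

run = neptune.init_run(project="my-workspace/my-project")  # placeholder project
run["parameters"] = {"lr": 3e-4, "batch_size": 32}

for loss in [0.9, 0.7, 0.55]:       # stand-in for a real training loop
    run["train/loss"].append(loss)  # appends to a metric series

run.stop()
```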
26 September, 11:00
LLM Hallucinations 101: Why Do They Appear? Can We Avoid Them?
Aitor Mira Abad    In 2022, when GPT-3.5 was introduced with ChatGPT, many, like me, started experimenting with various use cases. A friend asked me if it could read an article, summarize it, and answer some questions, like a research assistant. At that time, ChatGPT had no tools to explore websites, but I was...
19 September, 11:00
LLM Guardrails: Secure and Controllable Deployment
Natalia Kuzminykh    Large Language Models (LLMs) are often unpredictable and hard to control in practice. For example, a model can perform well during testing but fail in production, leading to inconsistent outputs or hallucinations. This unpredictability is intrinsic to the stochastic nature of LLMs: they can produce...
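(The simplest form of a guardrail is a validate-and-retry loop around the model call; a bare-bones sketch, with `call_llm` as a hypothetical client and a deliberately toy validation rule.)

```python
# Bare-bones output guardrail: validate the answer against a rule and
# retry with corrective feedback. `call_llm` is a hypothetical client.
import re

def guarded_answer(question: str, max_retries: int = 2) -> str:
    prompt = question
    for _ in range(max_retries + 1):
        answer = call_llm(prompt)  # placeholder for your LLM call
        # Toy rule: reject answers containing anything email-shaped.
        if not re.search(r"\b\S+@\S+\.\w+\b", answer):
            return answer
        prompt = f"{question}\n(Do not include email addresses in the answer.)"
    return "Sorry, I can't provide a compliant answer."
```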
12 September, 11:00
Reinforcement Learning From Human Feedback (RLHF) For LLMs
Michał Oleszak    Reinforcement Learning from Human Feedback (RLHF) has turned out to be the key to unlocking the full potential of today’s large language models (LLMs). There is arguably no better evidence for this than OpenAI’s GPT-3 model. It was released back in 2020, but it was only its RLHF-trained version...
05 September, 11:00
LLMs For Structured Data
Ricardo Cardoso Pereira    It is estimated that 80% to 90% of the data worldwide is unstructured. However, when we look for data in a specific domain or organization, we often end up finding structured data. The most likely reason is that structured data is still the de facto standard for quantitative information....