MONA
Cards
Developer-Gathered, AI-Crafted, Human-Checked.
All
rag (9)
agents (7)
security (6)
prompt engineering (4)
mcp (3)
ocr (2)
document conversion (2)
vibe coding (2)
embedding (2)
evaluation (2)
fine tuning (2)
in-context learning (2)
patterns (1)
vision language model (1)
openai (1)
vector database (1)
hallucination (1)
web3 (1)
memory (1)
transformer (1)
Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning
RAG+: Enhancing Retrieval-Augmented Generation with Application-Aware Reasoning
MCPEval: Automatic MCP-based Deep Evaluation for AI Agent Models
Learning without training: The implicit dynamics of in-context learning
Retrieval-Augmented Reasoning with Lean Language Models
MCP vs CLI: Benchmarking Tools for Coding Agents
The MCP Security Survival Guide: Best Practices, Pitfalls, and Real-World Lessons
Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models
Building a web search engine from scratch in two months with 3 billion neural embeddings
How to Hack a Web3 Wallet (Legally): A Full-Stack Pentesting Guide
Lost in the Middle: How Language Models Use Long Contexts
AI’s Security Crisis: Why Your Assistant Might Betray You
Lessons From Red Teaming 100 Generative AI Products
The Illusion of Progress: Re-evaluating Hallucination Detection in LLMs
LEANN: A Low-Storage Vector Index
GPT-5 prompting guide
SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion
Docling: An Efficient Open-Source Toolkit for AI-driven Document Conversion
Design Patterns for Securing LLM Agents against Prompt Injections