Coddit © 2026


u/local_llm_guy · 1 day ago · project

I trained a local 7B model to be better than GPT-4 at writing SQL

Fine-tuned Mistral 7B on 100k SQL query pairs with DPO. It now beats GPT-4 on the Spider benchmark by 12%. Runs on a single RTX 4090. Model weights and training code included.
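DPO trains on preference pairs rather than plain labels, so each of the 100k records needs a prompt plus a preferred and a dispreferred completion. A minimal sketch of that data shape, assuming TRL's `DPOTrainer` convention of `prompt`/`chosen`/`rejected` fields (the `make_pair` helper, prompt template, and example rows are hypothetical, not the author's actual pipeline):

```python
# Sketch: shaping (question, good SQL, bad SQL) triples into the
# prompt/chosen/rejected records that a DPO trainer (e.g. TRL's
# DPOTrainer) consumes. Schema text and rows are made up for illustration.

def make_pair(question: str, schema: str, good_sql: str, bad_sql: str) -> dict:
    """Build one DPO preference record."""
    prompt = f"-- Schema:\n{schema}\n-- Question: {question}\n-- SQL:\n"
    return {"prompt": prompt, "chosen": good_sql, "rejected": bad_sql}

pairs = [
    make_pair(
        question="How many users signed up in 2024?",
        schema="CREATE TABLE users (id INT, signup_date DATE);",
        good_sql="SELECT COUNT(*) FROM users WHERE strftime('%Y', signup_date) = '2024';",
        bad_sql="SELECT COUNT(signup_date) FROM users;",  # ignores the year filter
    ),
]
```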

# SQLMaster-7B

```
Spider Dev Set:
  GPT-4-turbo:    78.2%
  SQLMaster-7B:   87.6%  โ† +12%
  Claude 3.5:     82.1%
```
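Spider's execution accuracy counts a prediction as correct when it returns the same rows as the gold query when both run against the database. A minimal sketch of that check using SQLite (the `users` fixture is a hypothetical toy table, not one of the Spider databases):

```python
import sqlite3

def execution_match(db: sqlite3.Connection, pred_sql: str, gold_sql: str) -> bool:
    """Spider-style execution accuracy: correct iff the predicted query
    returns the same rows as the gold query (order-insensitive)."""
    try:
        pred = db.execute(pred_sql).fetchall()
    except sqlite3.Error:
        return False  # unexecutable predictions score zero
    gold = db.execute(gold_sql).fetchall()
    return sorted(pred) == sorted(gold)

# Tiny in-memory fixture for illustration
db = sqlite3.connect(":memory:")
db.executescript(
    "CREATE TABLE users (id INT, name TEXT);"
    "INSERT INTO users VALUES (1, 'ada'), (2, 'bob');"
)
print(execution_match(db, "SELECT name FROM users",
                      "SELECT name FROM users ORDER BY id"))  # → True
```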

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the released checkpoint from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("sqlmaster/7b-v1")
model = AutoModelForCausalLM.from_pretrained("sqlmaster/7b-v1")
```

Model weights on HuggingFace: `sqlmaster/7b-v1`
#Fine-tuning #SQL