Discussion (8 comments)
An interactive explainer on how MicroGPT works has been published. With a minimal setup, it lets you understand GPT's inner workings visually and hands-on. It's a great resource for learning the core principles of GPT that any engineer should try at least once.
By the end of training, the model produces names like "kamon", "karai", "anna", and "anton". None of them are copies from the dataset.
Hey, I am able to see kamon, karai, anna, and anton in the dataset, it'd be worth using some other names: https://raw.githubusercontent.com/karpathy/makemore/988aa59/... (https://raw.githubusercontent.com/karpathy/makemore/988aa59/names.txt)
The part that eludes me is how you get from this to the capability to debug arbitrary coding problems. How does statistical inference become reasoning?
For a long time, it seemed the answer was "it doesn't." But now, using Claude Code daily, it seems it does.
I read through this entire article. There was some value in it, but I found it to be very "draw the rest of the owl". It read like introductions to conceptual elements or even proper segues had been edited out. That said, I appreciated the interactive components.
The original article from Karpathy:
https://karpathy.github.io/2026/02/12/microgpt/ (https://karpathy.github.io/2026/02/12/microgpt/)
It says it's tailored for beginners, but I don't know what kind of beginner can parse multiple paragraphs like this:
"How wrong was the prediction? We need a single number that captures "the model thought the correct answer was unlikely." If the model assigns probability 0.9 to the correct next token, the loss is low (0.1). If it assigns probability 0.01, the loss is high (4.6). The formula is −log(p), where p is the probability the model assigned to the correct token. This is called cross-entropy loss."
Is it becoming a thing to misspell and add grammatical mistakes on purpose to show that an LLM didn't write the blog post? I noticed several spelling mistakes in Karpathy's blog post that this article is based on and in this article.