Transformers
Conceptual Guides
• Philosophy
• Glossary
• What BOINC AI Transformers can do
• How BOINC AI Transformers solve tasks
• The Transformer model family
• Summary of the tokenizers
• Attention mechanisms
• Padding and truncation
• BERTology
• Perplexity of fixed-length models
• Pipelines for webserver inference
• Model training anatomy