Transformers
🌍CONCEPTUAL GUIDES
• Philosophy
• Glossary
• What BOINC AI Transformers can do
• How BOINC AI Transformers solve tasks
• The Transformer model family
• Summary of the tokenizers
• Attention mechanisms
• Padding and truncation
• BERTology
• Perplexity of fixed-length models
• Pipelines for webserver inference
• Model training anatomy
