Search Results

7 total results found

Poolnoodle-BigNoodle

AI Models

Poolnoodle-BigNoodle is our biggest standard transformer model, with about 75 billion parameters. It's a general-purpose model, trained on many languages and a wide range of general capabilities. Poolnoodle-BigNoodle's strong points are general reasoning and a deep...

Poolnoodle-CodeNoodle

AI Models

Poolnoodle-CodeNoodle is a large language model derived from Code Llama, trained and fine-tuned by ScaiLabs. It's based on the Llama 2 13B model and has about 13 billion parameters. With an extended context length of 16,000 tokens, Poolnoodle-Code...

Poolnoodle-LongNoodle

AI Models

Poolnoodle-LongNoodle is a companion model to Poolnoodle-BigNoodle, but with far fewer parameters. It has, however, been trained on mostly the same datasets. The difference between LongNoodle and BigNoodle is that LongNoodle supports more than 128K tokens in its c...

Poolnoodle-FixerNoodle

AI Models

Poolnoodle-FixerNoodle is an LLM trained to route requests between LLMs and plugins. It's a 13-billion-parameter model, based on the original Poolnoodle model, which has its roots in the Vicuna 13B model. The model has been trained to interact with several so call...

Poolnoodle-ToolNoodle

AI Models

Poolnoodle-ToolNoodle is a model with 13 billion parameters that has been trained to interact with APIs. It has been trained to understand both API documentation and API definitions such as OpenAPI/Swagger, RAML, and SOAP. Poolnoodle-ToolNoodle isn't o...

Poolnoodle-BabyNoodle

AI Models

Poolnoodle-BabyNoodle is our smallest LLM, optimized to run on embedded systems. This model has been built from the ground up by ScaiLabs and trained on a set of basic capabilities. The primary language the model supports is English,...

Poolnoodle-Mixup

AI Models

Poolnoodle-Mixup is our new main contender for a big model with deep reasoning capabilities. It's a sparse mixture-of-experts model based on Mixtral 8x7, but with updated tokenization and embeddings. Poolnoodle-Mixup's strong points are general reasoning and...