Spell machine learning platform goes on-prem
Spell, an end-to-end platform for machine learning and deep learning—covering data prep, training, deployment, and management—has announced Spell for Private Machines, a new version of its system that can be deployed on your own hardware as well as on cloud resources.
Spell was founded by Serkan Piantino, former director of engineering at Facebook and founder of Facebook’s AI Research group. Spell allows teams to create reproducible machine learning systems that incorporate familiar tools such as Jupyter notebooks and that leverage cloud-hosted GPU compute instances.
Spell emphasizes ease of use. For example, hyperparameter optimization for an experiment is a high-level, one-command operation. Nor do users need to do much infrastructure configuration; Spell detects what hardware is available and orchestrates jobs accordingly. Spell also organizes experiment assets, so both experiments and their data can be versioned and checkpointed as part of the development process.
Spell originally ran only in the cloud; there’s been no “behind-the-firewall” deployment until now. Spell for Private Machines allows developers to run the platform on their own hardware. Both on-prem and cloud resources can be mixed and matched as needed. For instance, a prototype version of a project could be created on local hardware, then scaled out to an AWS instance for production deployment.
Much of Spell’s workflow is already designed to feel as if it runs locally, and to complement existing workflows. Python tools for Spell work can be set up with pip install spell, for example. And because the Spell runtime uses containers, multiple versions of an experiment with different hyperparameter tunings can be run side by side.
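A minimal sketch of that setup path might look like the following. Only the pip install command is stated above; the subsequent login and run commands, and the train.py script name, are illustrative assumptions about the platform's CLI, not confirmed details.

```shell
# Install Spell's Python tooling (per the article).
pip install spell

# Hypothetical usage, for illustration only: authenticate,
# then launch a containerized training run. Exact subcommands
# and arguments are assumptions, not documented here.
spell login
spell run "python train.py"
```

Because each run executes in its own container, repeating the run command with different hyperparameter values would, under this model, produce isolated side-by-side experiments.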