Magic has announced advances in ultra-long-context models with its Long-Term Memory (LTM) models, which can attend to up to 100 million tokens of context during inference. This enhances code synthesis by letting the model draw on entire codebases, documentation, and libraries at once. Their new evaluation method, HashHop, addresses flaws in existing long-context evaluations by using pairs of random hashes: because hashes are incompressible and carry no semantic hints, the model must genuinely store and retrieve large amounts of information rather than rely on shortcuts.
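The HashHop idea described above can be sketched as follows. This is a minimal, hypothetical illustration based on the published description, not Magic's actual benchmark code: the prompt is filled with random hash pairs, one hidden chain of pairs links a starting hash to a final answer, and the model is asked to follow the chain across multiple hops.

```python
import random
import secrets

def make_hash(n_hex_chars=16):
    """Random hex token; incompressible, so it cannot be guessed from context."""
    return secrets.token_hex(n_hex_chars // 2)

def build_hashhop_instance(num_pairs=100, hops=3):
    """Build a HashHop-style instance (illustrative sketch).

    Returns (prompt, query, answer): a prompt of 'A = B' hash pairs,
    a starting hash, and the hash reached after `hops` lookups.
    """
    # Distractor pairs of unrelated random hashes.
    pairs = [(make_hash(), make_hash()) for _ in range(num_pairs)]
    # A chain h0 -> h1 -> ... -> h_hops hidden among the distractors.
    chain = [make_hash() for _ in range(hops + 1)]
    for a, b in zip(chain, chain[1:]):
        pairs.append((a, b))
    random.shuffle(pairs)  # ordering carries no information
    prompt = "\n".join(f"{a} = {b}" for a, b in pairs)
    return prompt, chain[0], chain[-1]

prompt, query, answer = build_hashhop_instance()
```

Scoring is then straightforward: ask the model for the hash reached after N hops from the query and compare its output to `answer`. Because every token in the prompt is random, partial credit from paraphrase or semantic similarity is impossible.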
Magic has trained its first 100M-token context model, LTM-2-mini, which is considerably more efficient than comparable models such as Llama 3.1 at that context length, requiring significantly less compute. Although early results on real tasks show room for improvement, the model has demonstrated its capabilities by implementing working functions in complex codebases without human intervention.
The company is expanding its supercomputing capacity in partnership with Google Cloud and has raised $465 million in funding. Magic is focused on efficient inference-time compute, aiming to streamline development so that even complex features can be implemented quickly. They are actively hiring to advance their research and development efforts in AI and cybersecurity[1].