The Magnum Experience
Why is it so smooth?
This page explains the philosophy behind building Magnum and delivering the best possible user experience.
The answer is simple: Magnum was made to empower the community, not as a product to generate revenue. As student developers, we have always appreciated open source and community efforts.
Then came the launch of OpenAI's GPT-3. It took the world by storm, and there was enormous hype around it. We decided to integrate it into Magnum (currently called Magnum Full) to make the UX even better. Instead of TeX files, you can now generate smooth animations straight from plain text!
In some cases, your PC may not have enough disk space or computing power to handle the heavy workload of rendering 1440p 60 fps animations. This can lead to long, frustrating processing times and even system crashes. To avoid all of this, and to greatly speed up and smooth out the process, we made Magnum Lite.
The full 15 GB installation of Manim, LaTeX, and Magnum lives on a reusable, shared cloud machine. It has plenty of free storage, high RAM, GPUs suited to video processing, and more, so you don't have to worry about the small and big things.
We also wanted to simplify interfacing with LaTeX. Raw LaTeX code looks bewildering to read: it carries a lot of redundant markup that we still wish were fixed, even as we use it today. Take line breaks, for example: couldn't they just be more intuitive? That's what we fixed in Magnum.
Users tell us that the Magnum LaTeX format is more readable at a glance and far more portable. Also, we ask you to provide your input as a .txt file, not a .tex file. Many people write their LaTeX in online IDEs, where retrieving and transferring .tex files is a hassle, so we removed that hassle too!
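For context, here is what ordinary LaTeX markup looks like for a simple two-step equation. The explicit `\\` line breaks, alignment `&` markers, and environment boilerplate are the kind of clutter described above (this is standard LaTeX, shown only for contrast; it is not the Magnum input format):

```latex
\begin{align}
  f(x) &= x^2 + 2x + 1 \\
       &= (x + 1)^2
\end{align}
```

In plain-text input, the same content can simply be written as two lines of math, with no break commands or environment scaffolding.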