Microsoft, in collaboration with OpenAI, has introduced Orca, a powerful AI model designed to imitate and learn from large language models like GPT-4. With 13 billion parameters, Orca aims to overcome the limitations of smaller models by emulating the reasoning processes of larger foundation models.
Orca draws on GPT-4’s capabilities to fine-tune its own performance for specific applications. Its smaller size means it can run efficiently on fewer computing resources, letting researchers optimise the model for their individual needs without relying on a large-scale data centre.
According to Microsoft’s research report, Orca has the ability to imitate and learn from vast language models. With the help of GPT-4, Orca can learn complex reasoning processes, explanations, and intricate step-by-step instructions. It builds on the principles of Vicuna, another AI model.
Microsoft harnesses Orca’s power to advance progressive learning by leveraging extensive imitation data. Notably, Orca has already achieved a significant milestone, surpassing Vicuna’s performance by more than 100 per cent on challenging zero-shot reasoning benchmarks such as Big-Bench Hard (BBH). The model also posts a 42 per cent improvement over conventional instruction-tuned AI models when evaluated on AGIEval.
On benchmarks such as BBH, Orca shows reasoning abilities comparable to ChatGPT despite its smaller size. It also scores well on standardised academic tests such as the SAT, LSAT, GRE, and GMAT, though not at the same level as GPT-4. The Microsoft research team emphasises Orca’s ability to learn from step-by-step explanations, whether produced by human experts or by advanced language models. Orca is expected to improve its skills and capabilities over time, positioning it as a potential competitor to GPT-4.