PRODUCT LAUNCH


Since 2019, AICA has focused on developing intuitive robotics software that enables engineers to automate complex robotic tasks.
On October 10th, we reach a major milestone: the first public release of the AICA System!

About AICA System & Studio

Interactive application builder

AICA Studio lets you build advanced robotic applications more easily than ever with our unique drag-and-drop graph editor

Real-time control and monitoring

Connect to industry-standard robot arms with advanced position, velocity and force controllers

Motion generation and reinforcement learning

Access a growing library of smart components, or extend the functionality with our open SDK

Why choose AICA System?

As an innovative software startup, AICA is developing new ways to control robots that make them more generalizable and adaptable across tasks. The ability to deploy real-time, sensor-driven control loops on robot platforms is becoming increasingly important. With AICA's focus on modern, modular software development and its strong foundation in machine learning and optimization techniques, you can explore new ways to advance your own robotics projects.

  • Built with modern, modular programming languages and free from the limitations of native robot control languages, AICA software is uniquely ready to harness and deploy advanced machine learning and optimization techniques on real robot hardware. This has been demonstrated in several existing applications through task optimization, learning from demonstration and, more recently, deep learning in simulation.

    With task optimization, the parameters of AICA components or controllers are iteratively adjusted and optimized from measured feedback according to a reward function. For example, a force-compliant assembly or insertion motion could be parameterized to move faster or more or less forcefully, guided by a reward function that values speed and accuracy and penalizes high contact forces (a simplified sketch of this idea follows this list). The task optimization approach helps reduce the development time spent manually fine-tuning a control configuration, and can sometimes find more performant configurations than manual tuning would.

    AICA is also the right partner for training and deploying the new generation of large models for robot perception and motion planning, with experience training complete behaviors and control policies in simulation using Isaac Sim and NVIDIA Omniverse and then running them live on a real robot. This capability in particular would be an interesting avenue to explore in a future continuation of the project.

  • Real-time control loops allow robot behavior to be influenced by environmental feedback from force or vision sensors, and allow control decisions to be made and adapted at runtime (see the control-loop sketch after this list). This stands in contrast to traditional robot programming, where precise robot motions are predetermined and executed on schedule. While the traditional approach works well in highly controlled mass-production environments and can run with high speed and reliability, it cannot automatically adapt to uncertainties or unstructured tasks. Using vision or force feedback for robot control is also possible with existing market solutions, but those are normally purpose-built for specific common applications such as palletizing, bin picking, welding or press-fitting. AICA prides itself on an open and extensible control framework that can apply many types of control algorithms across a range of hardware brands.

  • For this PoC, a Universal Robots cobot was suggested because of its convenient safety and collaboration features and its previously proven suitability for similar applications. While these robots provide a capable and intuitive native programming interface, it is difficult to program scalable and complex interactions with it. For example, using the CAD geometry of the reactor as a reference to localize and calibrate the screw holes, and subsequently planning the handling motions in the new reference frames, would be difficult to achieve with the native programming on the robot directly. Similarly, generating and parameterizing the various dynamic behaviors for finding the alignment of the screw and screw hole, or the screwdriver and nut, and executing those behaviors in a force-compliant control mode, would require many complicated, task-specific scripts on the robot. Not only does this require specialized knowledge of the proprietary robot language, it also makes it difficult to switch to another brand without reprogramming everything (for example, because payload or IP rating requirements change, or because a seventh joint is needed for the robot to be flexible and dexterous enough to reach difficult locations).

    Overall, the use of an external controller not only enables more advanced sensor-driven control, it also makes the software development process more agile and the resulting software more scalable and reusable. Any custom functionality for the project is written as C++ or Python modules that can then be reused in other applications and even with different robots.
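
To make the task optimization idea from the first point more concrete, here is a minimal Python sketch. The insertion task, its two parameters (approach speed and contact stiffness), the run_insertion stub and the reward weights are illustrative assumptions, not part of the AICA API; the structure is simply parameterize, execute, measure, score and update.

    import random

    def run_insertion(params):
        """Stand-in for executing one insertion attempt on the robot.
        Returns simulated cycle time [s], position error [mm] and peak force [N]."""
        cycle_time = 0.5 / params["approach_speed"] + random.uniform(-1.0, 1.0)
        position_error = random.uniform(0.0, 0.5)
        peak_force = 0.05 * params["stiffness"] + random.uniform(0.0, 5.0)
        return cycle_time, position_error, peak_force

    def reward(cycle_time, position_error, peak_force):
        # Value speed and accuracy, penalize contact forces above 20 N.
        return -cycle_time - 10.0 * position_error - 0.5 * max(0.0, peak_force - 20.0)

    # Simple random-search loop over the two controller parameters.
    best = {"approach_speed": 0.05, "stiffness": 200.0}
    best_r = reward(*run_insertion(best))
    for _ in range(50):
        candidate = {key: value * random.uniform(0.8, 1.2) for key, value in best.items()}
        r = reward(*run_insertion(candidate))
        if r > best_r:
            best, best_r = candidate, r
    print(f"Best parameters found: {best}")

In practice the random search could be replaced by a more sample-efficient optimizer, but the loop of executing the task, measuring feedback and scoring it against a reward stays the same.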
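
The difference between predetermined motions and runtime adaptation, as described in the second point, can likewise be illustrated with a short Python sketch. The admittance-style rule, the thresholds and the crude spring contact model below are illustrative assumptions rather than AICA or vendor interfaces; the point is that the commanded velocity is recomputed every control cycle from the measured force instead of being replayed from a fixed trajectory.

    APPROACH_SPEED = 0.02   # [m/s] downward speed before contact
    CONTACT_FORCE = 5.0     # [N] force threshold that signals contact
    TARGET_FORCE = 15.0     # [N] desired contact force during insertion
    GAIN = 0.002            # [m/s per N] simple admittance gain
    DT = 0.002              # [s] 500 Hz control period

    def control_step(measured_force):
        """Choose the commanded velocity at runtime from the measured force."""
        if measured_force < CONTACT_FORCE:
            return -APPROACH_SPEED                      # free-space approach
        return -GAIN * (TARGET_FORCE - measured_force)  # contact force regulation

    # Simulated loop: the tool descends and presses on a surface at z = 0.05 m.
    position, force = 0.06, 0.0
    for _ in range(1000):
        position += control_step(force) * DT
        force = max(0.0, 2000.0 * (0.05 - position))    # crude spring contact model
    print(f"Contact force after 2 s: {force:.1f} N (target {TARGET_FORCE} N)")

A predetermined program would instead command a fixed pose or trajectory and could not react if the surface were slightly higher or lower than expected.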

Brand new features in AICA Studio v4.0

Live visualization

Monitor robot and sensor state data directly in the browser

Manage application events

A new transition event feature allows precise management of lifecycle states and error handling for components, controllers and hardware

Advanced application logic

Use the new sequence and condition blocks to define exactly where and when application events should occur

How do we support our users?

Documentation

Explore AICA technical documentation as a helpful reference as you build

AICAdemy platform

AICAdemy provides a suite of educational content and support resources to empower you to build independently with AICA System

Recurring webinars

Participate in live webinars on getting up and running, building custom components, and more

AICA Community

Engage in the AICA Community to get to know other AICA users and to ask and answer questions

Software that simplifies robot integration & programming across diverse hardware, to build more capable and flexible systems

Dynamic motion. Force control. Reinforcement learning.