Solution for reception

The robot receptionist can integrate with pass-issuing systems, make audio and video calls to the hosts of visitors, and advise and assist people.

Solution for information desks

The robot consultant can integrate with third-party systems, services, and devices. This allows it to scan passports and automatically pre-fill documents, as well as issue queue tickets. It can also advise on matters of interest and help with navigation.

Solution for retail

The robot for retail can integrate with databases and issue loyalty cards to customers. Promobot can also advise on products, help people with navigation, give out discount coupons, and read product barcodes to provide information about those products.

Solution for exhibitions

The robot attracts the attention of passers-by; many people want to talk to it or be photographed with it. This novelty appeal can help a company at events and exhibitions. The robot can show promotional material, lead a targeted dialogue, and survey visitors.

Solution for education

The robot provides educational content in an engaging manner that supports social development and encourages interest in science and technology. It can talk, dance, tell stories, play games, encourage physical activity, and chat with children.

Solution for elderly care

Many elderly people live alone and are lonely. They often have trouble keeping track of everyday activities, such as taking their medicine. The robot supplements personal care services and improves safety by raising alerts for medical emergencies such as falls.

Our Team

Meet our awesome and expert team members

Skills & Experience

Specialising in B2B marketing, strategic business development, international trade management, product development, and innovation communications.

Elena Sjödin

CEO

Skills & Experience

Entrepreneur in the IT field. Specialties: full-stack development, IT infrastructure management, Linux/BSD, and hands-on knowledge and experience in information security and ISO 27001.

Patrik Bengtsson

CTO

Skills & Experience

Experienced software engineer with a demonstrated history of working in the industrial automation industry, holding a Bachelor of Science in Applied Mathematics. Skilled in event planning, computer science, C, artificial intelligence, and embedded systems.

Timur Bobylev

Tech Lead

Skills & Experience

Computer scientist and researcher in the field of embedded systems with many publications in scientific journals; reverse engineer and GIAC GREM certification holder.

Sergey Yunakovsky

Adviser

Blog

What we think!

AI for poker: teaching to bluff

Improvements in artificial intelligence can be most easily judged by its progress in common strategy games. Over the past two decades, algorithms' ability to achieve success has surpassed that of the world's best players: backgammon, checkers, and chess have all been won by algorithms facing the best available human players. The common factor between these games is that their players enjoy information symmetry: all players have access to identical information about the current state of the game. Furthermore, with the exception of the opposing player's next move, this information is complete. There are no unknown variables, no pieces off the table; even the options available to the opponent are known. But what about games with incomplete and/or asymmetrical information? Such a complex environment is much more similar to the reality of actual decision making.

A good example of a complex environment featuring incomplete and asymmetrical information is poker. It contains incomplete information about the distribution of the cards and asymmetric information about the strategies of the players involved. Moreover, the number of possible states of the game is enormous: at the beginning of a game of Texas hold'em, there are 1,335,062,881,152,000 possible card states, with no way of knowing which one exists. Despite the fact that poker is a gambling game, it is recognized as an official sport, and national sports poker federations exist in almost every country.

Today, this game has millions of fans around the world, but even when poker was still far from global popularity, it was appreciated not only by players but also by scientists. The pioneer of modern game theory, John von Neumann, was so fascinated by this game of bluffing and betting that he stated: "Real life consists of bluffing, of little techniques of deception, of thinking about what actions you expect from other people. That's what the game represents in my theory." So, given such a complex environment, the question is how best to design a program capable of winning against experienced human opponents. The history of the development of AI for poker in fact spans more than 30 years, but the most outstanding achievements have occurred in the last 3 years.

Artificial intelligence makes its breakthrough

The first programs and algorithms for poker appeared in the 1980s; for example, Mike Caro's Orac system, which was written in 1984 and first demonstrated at the Stratosphere tournament. In 1991, the world's first research group dedicated to the development of AI for poker was established at the University of Alberta (Canada). In 1997, this group demonstrated its Loki system, the first successful and meaningful realization of AI for poker. Loki played at a slightly worse level than the average human player, but it was a significant milestone for the entire research direction. In the 2000s, there was a paradigm shift in the writing of poker AI. Researchers, inspired by the success of Deep Blue in chess (which defeated the Russian grandmaster Garry Kasparov in 1997), moved toward a full methodology for the formulation of questions and new approaches to modeling decision problems in poker. In 2015, the University of Alberta introduced its Cepheus system, which literally "solved" one type of poker: heads-up limit hold'em (a simplified version containing roughly 10^18 game states at most).

This was a significant milestone in the development of AI, as it remains the only game involving incomplete information that has a complete optimal solution. It was achieved by setting Cepheus to play against itself for two months (similar to how AlphaGo was trained to beat the world champion at the Chinese game of Go). It is important to note that the system is not perfect, in the sense that it can sometimes lose chips in certain hands; however, over a sufficient number of games, Cepheus will still emerge the winner. It is also important to note that the no-limit version of heads-up poker still has no similar complete solution, due to its significantly higher number of game states. In 2017 there were two important events in the world of poker bots. First, the University of Alberta presented DeepStack, an algorithm for no-limit heads-up poker. Based on deep neural networks, the algorithm successfully defeated many human rivals, including professional players, similar to how AlphaGo learned to simulate human intuition by playing many games against itself over an extended period. However, the most significant event of 2017 in the world of poker bots, and possibly AI in general, occurred at a tournament in Pittsburgh.

The Libratus system from Carnegie Mellon University confidently defeated professional poker players: a team consisting of some of the world's best players of no-limit heads-up poker. In the players' estimation, the algorithm was so good that it seemed as if it were cheating and could see their cards. The matches were played in real time during the 20-day tournament, and the algorithm's actions were computed on a supercomputer in Pittsburgh. For the first time, an algorithm proved capable of playing poker at a more advanced level than its human opponents.

Application of artificial intelligence to real-world problems

While the game bots themselves are not directly applicable to real-world problems, their development has driven great advances in machine learning, problem solving, and decision making.

The algorithms and strategies modern poker bots use to overcome the best human players are universal and applicable to other environments with incomplete and asymmetric information. They can be ported to a variety of applications that require decision-making in a similarly complex environment, from security to marketing. For example, in a security role they could be used to interpret human decision making, based on the past behaviour and actions of the participants. In a marketing role they could be used to simulate bidding by taking on the role of the bidders. In the banking sector, too, there are many practical tasks where the algorithms behind advanced poker bots would find application.

Their application could be extended further into high-frequency trading, where actions are taken at a speed so great that no human operator is capable of intervening in real time. Risk management at ten thousand trades a second, based on more than mere split-second price-trajectory indicators, could well be within reach.

New approaches to the underlying structures and processes have made achieving this level of decision automation a real possibility within the coming years.

Approaches to artificial intelligence

The classical approach

One of the easiest and least time-consuming systems to implement is the expert system: a set of fixed IF-THEN rules that assigns the game situation to one of several predefined classes. Depending on the strength of the assembled combination, the system proposes one of the available actions. With this approach the problem is solved by a purely mathematical method, and the system will at any time calculate the optimal solution in terms of a Nash equilibrium.
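The IF-THEN idea can be sketched in a few lines of Python. This is only an illustration: the hand classes, thresholds, and pot-odds rule below are invented, not taken from any real poker system.

```python
# A minimal sketch of an IF-THEN expert system for a poker decision.
# Hand classes, thresholds, and the pot-odds rule are invented for
# illustration only.

def classify_hand(hand_strength: float) -> str:
    """Map a normalized hand strength in [0, 1] to a predefined class."""
    if hand_strength >= 0.8:
        return "strong"
    elif hand_strength >= 0.5:
        return "medium"
    else:
        return "weak"

def decide(hand_strength: float, pot_odds: float) -> str:
    """Fixed IF-THEN rules mapping a game situation to an action."""
    cls = classify_hand(hand_strength)
    if cls == "strong":
        return "raise"
    if cls == "medium" and pot_odds < hand_strength:
        return "call"
    return "fold"

print(decide(0.9, 0.3))  # strong hand -> "raise"
print(decide(0.6, 0.4))  # medium hand, favorable odds -> "call"
print(decide(0.2, 0.4))  # weak hand -> "fold"
```

Such a rule table is cheap to build and fast to evaluate, but, as the text notes, it only remains optimal under heavy restrictions on the game.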

However, the decision will only be optimal if the decisions of the other participants are also optimal. The search for such a solution is resource-intensive, so in practice it can be applied only with substantial restrictions on the rules: for example, in limit Texas hold'em for two agents, or in certain specific game situations.

The machine learning approach

More effective is an operational strategy that divides opponents into clusters and implements a counter-strategy against each cluster. Most good poker players use this approach. But unlike humans, a computer can enumerate and assign probabilities to a huge number of outcomes and, given a correct prediction of the opponents' behavior, make the most profitable decision in terms of mathematical expectation.
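The clustering step can be sketched as a nearest-centroid assignment over per-opponent statistics. The statistics (VPIP, i.e. how often a player voluntarily puts money in the pot, and aggression factor), the centroid values, and the counter-strategy labels below are all made-up illustrative numbers, not a real player model.

```python
import numpy as np

# Hedged sketch: cluster opponents by two common poker statistics
# (VPIP = voluntarily-put-money-in-pot rate, AF = aggression factor),
# then pick a counter-strategy per cluster. All values are invented.

def assign_cluster(stats, centroids):
    """Return the index of the nearest centroid for each opponent."""
    dists = np.linalg.norm(stats[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

COUNTER_STRATEGY = {0: "value-bet relentlessly",   # loose-passive cluster
                    1: "trap with strong hands",   # loose-aggressive cluster
                    2: "steal blinds often"}       # tight-passive cluster

centroids = np.array([[0.6, 0.8], [0.55, 3.0], [0.15, 0.7]])  # (VPIP, AF)
opponents = np.array([[0.62, 0.9], [0.5, 2.8], [0.12, 0.5]])

for idx in assign_cluster(opponents, centroids):
    print(COUNTER_STRATEGY[idx])
```

In a real system the centroids would come from a clustering algorithm (e.g. k-means) fitted to statistics collected over many past hands, rather than being hand-picked.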

To predict the opponents' behavior, it is necessary to collect game statistics from past matches and apply machine learning algorithms. Unfortunately for the authors of such algorithms, enumerating all possible outcomes in most game situations is intractable even for powerful computers, so optimization algorithms such as Monte Carlo Tree Search are needed. Finally, one can approach the creation of a strategy even more abstractly and implement a neural network whose input is the parameters of the game situation and whose output is a set of possible decisions. The disadvantage of this approach is that it requires a large dataset for training. This disadvantage can be mitigated by running the neural network against itself, similar to the AlphaGo approach, but one must be prepared for more than a day of training and modeling.
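The core idea behind Monte Carlo methods — estimating an action's value from random rollouts instead of enumerating every outcome — can be shown with a toy sketch. The simplified "game" below (win if our card beats a hidden opponent card) and its payoffs are invented for illustration and are not real poker.

```python
import random

# Hedged sketch of Monte Carlo sampling in place of full enumeration:
# instead of visiting every possible deal, we sample random outcomes
# and estimate each action's expected value. The toy game and payoffs
# are invented for illustration.

def rollout_ev(action, our_card, n_samples=20000, rng=None):
    """Estimate the expected chips won for an action by random sampling."""
    rng = rng or random.Random(0)           # fixed seed for repeatability
    total = 0.0
    for _ in range(n_samples):
        opp = rng.randint(1, 13)            # sample a hidden opponent card
        if action == "fold":
            total += -1                     # forfeit the blind
        elif action == "call":
            total += 2 if our_card > opp else -2
    return total / n_samples

best = max(["fold", "call"], key=lambda a: rollout_ev(a, our_card=11))
print(best)
```

Monte Carlo Tree Search builds on this same sampling idea, additionally growing a search tree and biasing rollouts toward the most promising branches.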

For more rigorous scientific approaches to the creation of poker bots, see the papers by researchers at the University of Alberta in Canada.

The impact of AI on Business Strategy

Nowadays Artificial Intelligence is not a new concept; however, the pace at which it is being integrated into a wide range of businesses deserves a detailed look. In general, Artificial Intelligence, or A.I., is streamlining business processes, allowing personnel to concentrate on more advanced tasks. For instance, A.I. can automate many business processes, from quite simple support in handling customer service calls to complex analysis and processing of insurance claims. The principle behind including Artificial Intelligence in business is to leverage the power of technology, reducing the influence of the human factor on a business process. With decreased pressure on them, people get an opportunity to apply their creativity and talents to business tasks that cannot be handled by virtual assistants. Thus, the main goal of incorporating A.I. into businesses is to harness the best of technology as well as of human resources for the benefit and growth of a business.

Opportunities brought by Artificial Intelligence

Above all, what features make A.I. highly attractive to business owners? The benefits that Artificial Intelligence can bring to a business are numerous indeed:

• Personalization

Regardless of a business's orientation, customers always expect a personalized experience. That is the market's major challenge, as it demands relevant, high-quality content delivered in a consistent manner. And that is where Artificial Intelligence can help. In particular, A.I. can analyze customers' expectations and make them clear to marketers. For example, A.I. can connect events to people's reactions to those events on social media. This helps marketers understand the customer need behind a particular action. In other words, marketers can elicit the desired reaction from their customers by personalizing content and offering better engagement opportunities at the right time and place.

• Data and business alignment

To make different business areas work together, it is important to align data and business goals, whether in sales or marketing. Along with providing businesses with useful insights, this alignment plays a major role in their growth. A.I. can analyze data from different business areas, and on the basis of this analysis the staff can make better decisions. For example, while analyzing and aligning the data, A.I. takes into account the business goals and the actions already taken by the marketing team, and on that basis can propose relevant actions to the sales team. Different departments of one business can then react and adapt to each other's actions, and this alignment is essential to meet consumers' expectations.

• Enhanced abilities of digital leaders

A.I. has made an unrivaled impact on digital leaders' ability to get consumers to take desired actions. From brand discovery and the first emotional connection through to an action, customers go through various stages. The reactions consumers demonstrate between these stages let digital leaders know customers' expectations and thereby shape their strategy, all in order to drive an action. Artificial Intelligence has enabled marketers to target prospects at every single stage of the consumer journey with precision. Thus, as A.I. provides insights and measurable results across different business aspects, it has changed the way digital leaders develop strategy and execute plans.

• Flexibility and mobility of business functions

Ultimately, Artificial Intelligence is the result of the evolution of technology. Certain business concepts that look commonplace now, such as online help desks or the home office, are also results of this evolution: businesses are getting more flexible and independent, regardless of where their employees are located or how they work. As this flexibility is enhanced by further development of the technology, A.I. may offer business owners and CEOs remote management of their staff. Because workforce management will become easier and less time-consuming, senior staff can focus on crucial tasks, which will reflect positively on the business.

• Improved digital security

Certainly, the need for digital security increases as technology also advances for underhanded purposes. This demands an evolution of cybersecurity, and here A.I. has major power. Artificial Intelligence can be programmed to recognize, analyze, and remember the patterns of different systems and networks, which allows it to detect anomalies. This means hacker attacks can be detected with both better accuracy and greater speed, providing businesses with better digital security.

• Increased accuracy and efficiency

One of the most important business factors is efficiency, and human mistakes have a big influence on it. While there is no way to eliminate human error entirely, it is possible to include A.I. in business processes to ensure accuracy in business tasks. A.I. analyzes and processes data faster and more precisely than human staff, regardless of education, skills, and experience. It thus offers businesses an opportunity to ensure accuracy and efficiency in their processes. This is especially relevant to businesses operating with data, as they can automate a major part of their research with the help of Artificial Intelligence.

• More employment opportunities

The further development of Artificial Intelligence will also bring growth in certain job markets, including those for data miners and analysts. Apart from delivering a range of benefits to businesses, A.I. is also creating opportunities for professionals skilled enough to utilize "Big Data" for the benefit of a business, which can also affect wage rates positively. The benefits of A.I. extend beyond the list above, and in the end it is the businesses that must decide how to utilize those benefits for better profit and development. Artificial Intelligence, if incorporated effectively, can bring astonishing results in business growth across industries and business areas, including IT, healthcare, finance, energy and mining, customer support, and more. However, as A.I. will bring ongoing changes, organizations should be open to lifelong learning and ready to provide their staff with opportunities to develop new skills and adjust to new realities.

Deep machine learning for robots

This review will be useful for people who are becoming engaged in the design and implementation of physical robots and are looking for a guide to further research, as well as for people who are interested in implementing perception functions in their robots and/or devices. The purpose of the robot determines the kinds of problems it needs to solve. In an automated (machine-controlled) context, solving these problems is divided into two parts: the controller and the sensors. The robot must be able to perform tasks not based on hard-coded coordinates or routine operating procedures, but by interpreting, assessing, and responding to changes in the surrounding area or workspace.
Starting with the sensors: it is often necessary to use complex sensors, such as cameras or lidars, together with special algorithms for processing the incoming information. Examples of the kinds of abilities required include:
Image analysis (vision)
Object detection (differentiation)
Object identification
Interpretation of object state (check)

Multiple object tracking

Segmentation

Allows the content of the field of view to be classified pixel by pixel.
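As a toy illustration of what "pixel by pixel" means, the sketch below labels every pixel as object or background using a simple brightness threshold. A real robot would use a trained segmentation network (e.g. an FCN or U-Net); the threshold and tiny image here are made up.

```python
import numpy as np

# Minimal sketch of pixel-wise segmentation by thresholding.
# A real system would use a trained network; the threshold and the
# 2x2 "image" below are invented for illustration.

def segment(image, threshold=128):
    """Return a per-pixel label map: 1 = object, 0 = background."""
    return (image > threshold).astype(np.uint8)

frame = np.array([[10, 200], [130, 40]], dtype=np.uint8)
mask = segment(frame)
print(mask)  # each pixel is classified independently
```

The essential property, shared with neural segmentation, is that the output has the same spatial shape as the input, with a class label at every pixel.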

Depth estimation

Allows obstacles, and the distances between them, to be identified using computer vision. If your robot's working conditions allow the use of depth cameras with active IR illumination, such as the Intel RealSense, you can use the proprietary SDK.

Example: non-maximum suppression of overlapping detections

# vim: expandtab:ts=4:sw=4
import numpy as np


def non_max_suppression(boxes, max_bbox_overlap, scores=None):
    """Suppress overlapping detections.

    Original code from [1]_ has been adapted to include confidence score.

    .. [1] http://www.pyimagesearch.com/2015/02/16/
           faster-non-maximum-suppression-python/

    Examples
    --------
        >>> boxes = [d.roi for d in detections]
        >>> scores = [d.confidence for d in detections]
        >>> indices = non_max_suppression(boxes, max_bbox_overlap, scores)
        >>> detections = [detections[i] for i in indices]

    Parameters
    ----------
    boxes : ndarray
        Array of ROIs (x, y, width, height).
    max_bbox_overlap : float
        ROIs that overlap more than this value are suppressed.
    scores : Optional[array_like]
        Detector confidence scores.

    Returns
    -------
    List[int]
        Indices of the detections that survived non-maxima suppression.
    """
    if len(boxes) == 0:
        return []

    # Accept plain lists as well as ndarrays; work in floating point.
    boxes = np.asarray(boxes, dtype=float)
    pick = []

    x1 = boxes[:, 0]
    y1 = boxes[:, 1]
    x2 = boxes[:, 2] + boxes[:, 0]
    y2 = boxes[:, 3] + boxes[:, 1]

    area = (x2 - x1 + 1) * (y2 - y1 + 1)
    if scores is not None:
        idxs = np.argsort(scores)
    else:
        idxs = np.argsort(y2)

    while len(idxs) > 0:
        # Keep the highest-ranked remaining box...
        last = len(idxs) - 1
        i = idxs[last]
        pick.append(i)

        # ...and compute its overlap with all other remaining boxes.
        xx1 = np.maximum(x1[i], x1[idxs[:last]])
        yy1 = np.maximum(y1[i], y1[idxs[:last]])
        xx2 = np.minimum(x2[i], x2[idxs[:last]])
        yy2 = np.minimum(y2[i], y2[idxs[:last]])

        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)

        overlap = (w * h) / area[idxs[:last]]

        # Drop the kept box and everything overlapping it too much.
        idxs = np.delete(
            idxs, np.concatenate(
                ([last], np.where(overlap > max_bbox_overlap)[0])))

    return pick

Displacement and decision-making

Most physical robots, whether they are manipulators, mobile robots or something else, need to move within their operating environment to achieve their task. The robots therefore need to adjust their own speed and trajectory in real time, based on the visual input they receive.

Orientation in space

Allows the robot to determine the coordinates of its own location in space, including inside buildings.

cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)

rosbuild_init()

IF(NOT ROS_BUILD_TYPE)
  SET(ROS_BUILD_TYPE Release)
ENDIF()

MESSAGE("Build type: " ${ROS_BUILD_TYPE})

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -O3 -march=native")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -O3 -march=native")

# Check C++11 or C++0x support
include(CheckCXXCompilerFlag)
CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11)
CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
if(COMPILER_SUPPORTS_CXX11)
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
   add_definitions(-DCOMPILEDWITHC11)
   message(STATUS "Using flag -std=c++11.")
elseif(COMPILER_SUPPORTS_CXX0X)
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x")
   add_definitions(-DCOMPILEDWITHC0X)
   message(STATUS "Using flag -std=c++0x.")
else()
   message(FATAL_ERROR "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.")
endif()

LIST(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/../../../cmake_modules)

find_package(OpenCV 3.0 QUIET)
if(NOT OpenCV_FOUND)
   find_package(OpenCV 2.4.3 QUIET)
   if(NOT OpenCV_FOUND)
      message(FATAL_ERROR "OpenCV > 2.4.3 not found.")
   endif()
endif()

find_package(Eigen3 3.1.0 REQUIRED)
find_package(Pangolin REQUIRED)

include_directories(
${PROJECT_SOURCE_DIR}
${PROJECT_SOURCE_DIR}/../../../
${PROJECT_SOURCE_DIR}/../../../include
${Pangolin_INCLUDE_DIRS}
)

set(LIBS
${OpenCV_LIBS}
${EIGEN3_LIBS}
${Pangolin_LIBRARIES}
${PROJECT_SOURCE_DIR}/../../../Thirdparty/DBoW2/lib/libDBoW2.so
${PROJECT_SOURCE_DIR}/../../../Thirdparty/g2o/lib/libg2o.so
${PROJECT_SOURCE_DIR}/../../../lib/libORB_SLAM2.so
)

# Node for monocular camera
rosbuild_add_executable(Mono
src/ros_mono.cc
)

target_link_libraries(Mono
${LIBS}
)

# Node for monocular camera (Augmented Reality Demo)
rosbuild_add_executable(MonoAR
src/AR/ros_mono_ar.cc
src/AR/ViewerAR.h
src/AR/ViewerAR.cc
)

target_link_libraries(MonoAR
${LIBS}
)

# Node for stereo camera
rosbuild_add_executable(Stereo
src/ros_stereo.cc
)

target_link_libraries(Stereo
${LIBS}
)

# Node for RGB-D camera
rosbuild_add_executable(RGBD
src/ros_rgbd.cc
)

target_link_libraries(RGBD
${LIBS}
)

Decision making when moving

Allows a mobile robot to decide on the maneuvers needed to optimize its trajectory in a dynamic environment. The algorithm uses reinforcement learning. An example of such an algorithm:

import sys
import logging
import argparse
import configparser
import os
import shutil
import torch
import gym
import git
from crowd_sim.envs.utils.robot import Robot
from crowd_nav.utils.trainer import Trainer
from crowd_nav.utils.memory import ReplayMemory
from crowd_nav.utils.explorer import Explorer
from crowd_nav.policy.policy_factory import policy_factory


def main():
    parser = argparse.ArgumentParser('Parse configuration file')
    parser.add_argument('--env_config', type=str, default='configs/env.config')
    parser.add_argument('--policy', type=str, default='cadrl')
    parser.add_argument('--policy_config', type=str, default='configs/policy.config')
    parser.add_argument('--train_config', type=str, default='configs/train.config')
    parser.add_argument('--output_dir', type=str, default='data/output')
    parser.add_argument('--weights', type=str)
    parser.add_argument('--resume', default=False, action='store_true')
    parser.add_argument('--gpu', default=False, action='store_true')
    parser.add_argument('--debug', default=False, action='store_true')
    args = parser.parse_args()

    # configure paths
    make_new_dir = True
    if os.path.exists(args.output_dir):
        key = input('Output directory already exists! Overwrite the folder? (y/n)')
        if key == 'y' and not args.resume:
            shutil.rmtree(args.output_dir)
        else:
            make_new_dir = False
            args.env_config = os.path.join(args.output_dir, os.path.basename(args.env_config))
            args.policy_config = os.path.join(args.output_dir, os.path.basename(args.policy_config))
            args.train_config = os.path.join(args.output_dir, os.path.basename(args.train_config))
    if make_new_dir:
        os.makedirs(args.output_dir)
        shutil.copy(args.env_config, args.output_dir)
        shutil.copy(args.policy_config, args.output_dir)
        shutil.copy(args.train_config, args.output_dir)
    log_file = os.path.join(args.output_dir, 'output.log')
    il_weight_file = os.path.join(args.output_dir, 'il_model.pth')
    rl_weight_file = os.path.join(args.output_dir, 'rl_model.pth')

    # configure logging
    mode = 'a' if args.resume else 'w'
    file_handler = logging.FileHandler(log_file, mode=mode)
    stdout_handler = logging.StreamHandler(sys.stdout)
    level = logging.INFO if not args.debug else logging.DEBUG
    logging.basicConfig(level=level, handlers=[stdout_handler, file_handler],
                        format='%(asctime)s, %(levelname)s: %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
    repo = git.Repo(search_parent_directories=True)
    logging.info('Current git head hash code: %s', repo.head.object.hexsha)
    device = torch.device("cuda:0" if torch.cuda.is_available() and args.gpu else "cpu")
    logging.info('Using device: %s', device)

    # configure policy
    policy = policy_factory[args.policy]()
    if not policy.trainable:
        parser.error('Policy has to be trainable')
    if args.policy_config is None:
        parser.error('Policy config has to be specified for a trainable network')
    policy_config = configparser.RawConfigParser()
    policy_config.read(args.policy_config)
    policy.configure(policy_config)
    policy.set_device(device)

    # configure environment
    env_config = configparser.RawConfigParser()
    env_config.read(args.env_config)
    env = gym.make('CrowdSim-v0')
    env.configure(env_config)
    robot = Robot(env_config, 'robot')
    env.set_robot(robot)

    # read training parameters
    if args.train_config is None:
        parser.error('Train config has to be specified for a trainable network')
    train_config = configparser.RawConfigParser()
    train_config.read(args.train_config)
    rl_learning_rate = train_config.getfloat('train', 'rl_learning_rate')
    train_batches = train_config.getint('train', 'train_batches')
    train_episodes = train_config.getint('train', 'train_episodes')
    sample_episodes = train_config.getint('train', 'sample_episodes')
    target_update_interval = train_config.getint('train', 'target_update_interval')
    evaluation_interval = train_config.getint('train', 'evaluation_interval')
    capacity = train_config.getint('train', 'capacity')
    epsilon_start = train_config.getfloat('train', 'epsilon_start')
    epsilon_end = train_config.getfloat('train', 'epsilon_end')
    epsilon_decay = train_config.getfloat('train', 'epsilon_decay')
    checkpoint_interval = train_config.getint('train', 'checkpoint_interval')

    # configure trainer and explorer
    memory = ReplayMemory(capacity)
    model = policy.get_model()
    batch_size = train_config.getint('trainer', 'batch_size')
    trainer = Trainer(model, memory, device, batch_size)
    explorer = Explorer(env, robot, device, memory, policy.gamma, target_policy=policy)

    # imitation learning
    if args.resume:
        if not os.path.exists(rl_weight_file):
            logging.error('RL weights do not exist')
        model.load_state_dict(torch.load(rl_weight_file))
        rl_weight_file = os.path.join(args.output_dir, 'resumed_rl_model.pth')
        logging.info('Load reinforcement learning trained weights. Resume training')
    elif os.path.exists(il_weight_file):
        model.load_state_dict(torch.load(il_weight_file))
        logging.info('Load imitation learning trained weights.')
    else:
        il_episodes = train_config.getint('imitation_learning', 'il_episodes')
        il_policy = train_config.get('imitation_learning', 'il_policy')
        il_epochs = train_config.getint('imitation_learning', 'il_epochs')
        il_learning_rate = train_config.getfloat('imitation_learning', 'il_learning_rate')
        trainer.set_learning_rate(il_learning_rate)
        if robot.visible:
            safety_space = 0
        else:
            safety_space = train_config.getfloat('imitation_learning', 'safety_space')
        il_policy = policy_factory[il_policy]()
        il_policy.multiagent_training = policy.multiagent_training
        il_policy.safety_space = safety_space
        robot.set_policy(il_policy)
        explorer.run_k_episodes(il_episodes, 'train', update_memory=True, imitation_learning=True)
        trainer.optimize_epoch(il_epochs)
        torch.save(model.state_dict(), il_weight_file)
        logging.info('Finish imitation learning. Weights saved.')
        logging.info('Experience set size: %d/%d', len(memory), memory.capacity)
    explorer.update_target_model(model)

    # reinforcement learning
    policy.set_env(env)
    robot.set_policy(policy)
    robot.print_info()
    trainer.set_learning_rate(rl_learning_rate)
    # fill the memory pool with some RL experience
    if args.resume:
        robot.policy.set_epsilon(epsilon_end)
        explorer.run_k_episodes(100, 'train', update_memory=True, episode=0)
        logging.info('Experience set size: %d/%d', len(memory), memory.capacity)
    episode = 0
    while episode < train_episodes:
        if args.resume:
            epsilon = epsilon_end
        else:
            if episode < epsilon_decay:
                epsilon = epsilon_start + (epsilon_end - epsilon_start) / epsilon_decay * episode
            else:
                epsilon = epsilon_end
        robot.policy.set_epsilon(epsilon)

        # evaluate the model
        if episode % evaluation_interval == 0:
            explorer.run_k_episodes(env.case_size['val'], 'val', episode=episode)

        # sample k episodes into memory and optimize over the generated memory
        explorer.run_k_episodes(sample_episodes, 'train', update_memory=True, episode=episode)
        trainer.optimize_batch(train_batches)
        episode += 1

        if episode % target_update_interval == 0:
            explorer.update_target_model(model)

        if episode != 0 and episode % checkpoint_interval == 0:
            torch.save(model.state_dict(), rl_weight_file)

    # final test
    explorer.run_k_episodes(env.case_size['test'], 'test', episode=episode)


if __name__ == '__main__':
    main()
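The exploration schedule in the training loop above is a linear ramp: epsilon falls from epsilon_start to epsilon_end over the first epsilon_decay episodes, then stays flat. A standalone sketch of that schedule (the default values here are illustrative, not taken from any particular config file):

```python
def epsilon_schedule(episode, epsilon_start=0.5, epsilon_end=0.1, epsilon_decay=4000):
    """Linearly ramp epsilon from epsilon_start to epsilon_end over
    epsilon_decay episodes, then hold it at epsilon_end."""
    if episode < epsilon_decay:
        return epsilon_start + (epsilon_end - epsilon_start) / epsilon_decay * episode
    return epsilon_end

print(epsilon_schedule(0))      # 0.5  (full exploration at the start)
print(epsilon_schedule(2000))   # ~0.3 (halfway through the ramp)
print(epsilon_schedule(10000))  # 0.1  (flat after the decay window)
```

Early episodes act nearly at random to fill the replay memory with diverse experience; later episodes mostly exploit the learned value model.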

Aspects of the implementation of robots

Performance

The algorithms described in this article are computationally demanding and often require a GPU. Depending on the robot's operating conditions, the developer must therefore choose the right deployment option and optimize the code accordingly.

Possible options:
  • Computation on a dedicated onboard computer
  • Computation on an onsite computer
  • Cloud computing (on an offsite computer)


The chosen option dictates the requirements for both hardware and algorithm optimization.
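Whichever option is chosen, it should be backed by measurements against the robot's control-loop budget. A minimal, stdlib-only sketch of such a benchmark; `fake_policy_forward` is a stand-in workload, not a real policy network:

```python
import time

def benchmark(fn, *args, warmup=3, repeats=10):
    """Return the mean wall-clock time of fn(*args) over `repeats` runs."""
    for _ in range(warmup):          # warm up caches before measuring
        fn(*args)
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

def fake_policy_forward(n=200):      # illustrative stand-in for an inference call
    return sum(i * i for i in range(n))

mean_s = benchmark(fake_policy_forward)
# Compare against the control-loop budget (e.g. 50 ms at 20 Hz) to decide
# between onboard, onsite, and cloud execution.
print(f"mean inference time: {mean_s * 1e3:.3f} ms")
```

If the mean time on the onboard computer exceeds the loop budget, the workload is a candidate for offloading, at the cost of network latency and a required connectivity fallback.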

Interaction

Robots operating autonomously in complex environments are governed not by one algorithm but by a set of algorithms running simultaneously. When deploying these algorithms on a robot, developers therefore need to ensure smooth integration between all the algorithms in the system, that is, their interaction and cooperation.
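One common pattern for making several concurrent algorithms cooperate is a priority arbiter: each module proposes a command (or stays silent), and the highest-priority active proposal wins. A stdlib-only sketch; the module names and command format are illustrative assumptions:

```python
def arbitrate(proposals):
    """Pick the command from the highest-priority module that proposed one.

    proposals: list of (priority, module_name, command_or_None);
    a lower priority number means more important (safety first).
    """
    active = [p for p in proposals if p[2] is not None]
    if not active:
        return ("idle", None)
    priority, module, command = min(active, key=lambda p: p[0])
    return (module, command)

proposals = [
    (0, "collision_avoidance", None),      # silent: no obstacle detected
    (1, "navigation", "forward 0.5 m/s"),
    (2, "gesture_engine", "wave"),
]
print(arbitrate(proposals))  # navigation wins while safety is silent

proposals[0] = (0, "collision_avoidance", "stop")
print(arbitrate(proposals))  # safety overrides everything else
```

This keeps each algorithm self-contained while making the precedence between them explicit and testable.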

Safety and security

Safety is of utmost concern when autonomous robots operate in the direct vicinity of people. The robot must be programmed to react appropriately, including to unexpected actions by the people around it. The review presented here is not an exhaustive list of tasks and algorithms.
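A typical safety layer continuously checks the distance to the nearest person and clamps the commanded speed before any other module's output is executed. A minimal sketch; the thresholds are illustrative assumptions, not production values:

```python
import math

def safe_speed(robot_pos, people, v_cmd, stop_dist=0.5, slow_dist=1.5):
    """Clamp commanded speed based on distance to the nearest person.

    Full stop inside stop_dist; linear slowdown between stop_dist
    and slow_dist; unrestricted beyond slow_dist.
    """
    if not people:
        return v_cmd
    d = min(math.dist(robot_pos, p) for p in people)
    if d <= stop_dist:
        return 0.0
    if d < slow_dist:
        return v_cmd * (d - stop_dist) / (slow_dist - stop_dist)
    return v_cmd

print(safe_speed((0, 0), [(3, 4)], 1.0))    # person 5 m away: full speed
print(safe_speed((0, 0), [(0.3, 0)], 1.0))  # person 0.3 m away: full stop
```

Because the clamp runs last, even an unexpected command from a higher-level algorithm cannot drive the robot into a person faster than the safety envelope allows.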


News

Robot Minds has been chosen from among hundreds of startups around the world to participate, together with 6 other companies, in one of the best accelerator programs. We have been working really hard to prove that our company has great potential. We are at the start of the road, but the journey promises to be very exciting!

Elena Sjödin, co-founder of Robot Minds

Our Products

Coming Soon

Starter Kit - $2900

Lorem ipsum dolor amet consectetur ut consequat siad esqudiat dolor

  • Basic Features
  • Up to 5 products
  • 50 Users Panels
Choose

Professional Kit - $4900

Lorem ipsum dolor amet consectetur ut consequat siad esqudiat dolor

  • Basic Features
  • Up to 100 products
  • 100 Users Panels
Choose

Advanced Kit - $7900

Lorem ipsum dolor amet consectetur ut consequat siad esqudiat dolor

  • Extended Features
  • Unlimited products
  • Unlimited Users Panels
Choose

Showcase

Robot Minds creates unique solutions for the implementation of commercial robots in the business. Our behavior programs are based on the experience of our customers and the demands of the modern market.

Business case for retail

Robot sales assistant in a shop

Great solution for education

Our platform helps you interact with robot functions and settings

Vivid interest

Our technologies enable the robot to communicate

Accessible technologies

Robot minds makes development available

Service and help

We provide support for our products

Attraction of attention

Your visitors will be delighted

Guide kit

I know the way, follow me!

We can chat

Every day we develop our 'perception and response' algorithms

Implementation of corporate software

We can connect corporate utilities to your assistant

Navigation kit

We taught robots to see

Customizable equipment

Adjust the robot to your needs
