FAQ

Generally, the accuracy of a model is influenced by the accuracy and sufficiency of the data, as well as by the correct choice of model. To try again, it is usually enough to work further on the data: clean it, or accumulate historical data over a longer period. Choosing the right model is a matter of mathematics, so if you do not have such specialists, you can contact our company. We have highly qualified data scientists ready to work on your model.

The development of cloud computing and services makes forecasting models available not only to corporations, but also to small and medium-sized businesses. The solutions are sold by subscription (the SaaS model) and do not require capital investments at the initial stage, such as the purchase of servers or licenses. For instance, a machine-learning demand forecasting product developed by Beltel Datanomics to minimize returns of bakery products saved the plant 500,000 rubles per month in production costs, while the subscription (SaaS) cost did not exceed 50,000 rubles per month.

Yes, we can. However, the advisability of using your own servers for building intelligent systems is debatable. Creating AI-based solutions involves a number of standard stages, only three of which actively use computing power:

  1. Training the AI model
  2. Running the trained AI model
  3. Retraining the model on new data

At the same time, these stages require different types of computing resources. Stages 1 and 3, when the model is actively learning, demand maximum speed, specialized GPU accelerators, and so on; they take several hours, sometimes days. Retraining typically requires only a few hours per month. Running the finished model needs only standard resources. This creates a dilemma: buy powerful servers so that training completes in an acceptable time and then run them at 5% of capacity, or buy standard servers and train the model for weeks.
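The arithmetic behind this dilemma can be sketched in a few lines. This is a minimal illustration: the server price, hourly rate, and monthly training hours below are invented assumptions, not our actual rates.

```python
# Hedged back-of-the-envelope sketch of the buy-vs-rent dilemma.
# Every number here is an invented assumption for illustration.
GPU_SERVER_PRICE = 1_500_000       # rubles, one-time purchase (assumed)
CLOUD_GPU_RATE = 500               # rubles per GPU-hour (assumed)
TRAINING_HOURS_PER_MONTH = 8       # "a few hours per month" of retraining

def cloud_cost(months: int) -> int:
    """Rent GPU capacity only for the hours training actually runs."""
    return CLOUD_GPU_RATE * TRAINING_HOURS_PER_MONTH * months

def months_to_break_even() -> int:
    """After how many months of renting would buying the server pay off?"""
    months = 0
    while cloud_cost(months) < GPU_SERVER_PRICE:
        months += 1
    return months

print(cloud_cost(12))            # 48000 rubles for a year of rented training
print(months_to_break_even())    # 375 months (~31 years) under these assumptions
```

With these invented numbers, a dedicated GPU server would sit idle most of the time and take decades to pay for itself — this is the 5%-utilization side of the dilemma.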

The best option is to combine the cloud with your own infrastructure: capacity in the cloud is rented for training (for example, a few hours per month), after which the trained model is transferred to run on your own servers inside a closed infrastructure.

Generally, modern ERP systems have a built-in forecasting module, or the vendor offers a separate product that integrates easily with the ERP system for planning and budgeting. No module works by itself: it requires separate configuration, and sometimes functionality extensions by the developers, so that the system meets business needs. For accurate forecasts it is not enough to use only internal company data and historical information; it is important to take into account the external factors that may influence, for example, demand. For these purposes, models built with machine learning methods are used. They solve planning problems quickly and at minimal cost. In particular, a demand forecasting solution built on the Microsoft Azure platform can be deployed in two weeks if the provided data is of good quality, and can then significantly reduce business expenses by fully automating the planning process: the mathematical model configured in the cloud takes over the work of the analytical department.
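The effect of an external factor on forecast accuracy can be shown with a toy example. This is a minimal sketch with invented numbers; `forecast_internal_only` and `forecast_with_factor` are illustrative names, not part of any product.

```python
# Hedged sketch: a toy demand forecast with and without an external factor
# (here, a promotion flag). All numbers are invented for illustration.
from statistics import mean

# (demand, promo_active) pairs for past days
history = [(100, 0), (98, 0), (150, 1), (103, 0),
           (155, 1), (101, 0), (148, 1), (99, 0)]

def forecast_internal_only(history):
    """Uses only the demand history: predicts the overall mean."""
    return mean(d for d, _ in history)

def forecast_with_factor(history, promo_tomorrow):
    """Also uses the external factor: mean over days with the same flag."""
    return mean(d for d, p in history if p == promo_tomorrow)

actual = 152  # tomorrow is a promo day
print(abs(forecast_internal_only(history) - actual))   # 32.75
print(abs(forecast_with_factor(history, 1) - actual))  # 1
```

Ignoring the promotion flag, the forecast misses promo-day peaks by a wide margin; conditioning on the flag brings the error down sharply, which is exactly why external factors matter.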

Our team has worked in various areas of applying Big Data. For example, an automatic ordering solution for a full-cycle meat processing plant, built on the Datanomics Demand Forecast (DDF) product, generated 150,000 rubles of net monthly income (cost reduction minus subscription cost) by automating the work of the analytical department: analyst vacancies were simply no longer needed, and some specialists were transferred to other tasks. In addition, accurate forecasts allowed the plant to optimize additional deliveries to outlets, increasing sales of some items by 10% and reducing spoilage by 6%.

Industry has a strong demand for computer vision. A solution based on the Datanomics Industrial Video Analytics (DIVA) product detected harmful emissions in coke-chemical production and notified the responsible personnel, who eliminated the problem in time and so avoided harming the environment.

The period depends strongly on the task. For demand forecasting with pronounced seasonality, we need data covering at least two seasonal repetitions, i.e. more than two years for yearly seasonality. For process forecasting, the required data is measured by the number and diversity of completed processes rather than by a calendar period. However, if the technological process is also affected by, for example, ambient temperature and humidity, then a calendar binding appears as well.
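The "two seasonal repetitions" rule of thumb can be checked programmatically. This is a minimal sketch with synthetic data; `enough_history` and `lag_autocorrelation` are our illustrative helpers, not a library API.

```python
# Hedged sketch: checking data sufficiency and seasonal strength.
from statistics import mean

def enough_history(series, period):
    """Rule of thumb from above: at least two full seasonal cycles."""
    return len(series) >= 2 * period

def lag_autocorrelation(series, lag):
    """Crude seasonality indicator: how strongly the series correlates
    with itself shifted by `lag` observations (1.0 = perfectly periodic)."""
    m = mean(series)
    num = sum((series[i] - m) * (series[i - lag] - m)
              for i in range(lag, len(series)))
    den = sum((x - m) ** 2 for x in series)
    return num / den

# Synthetic daily demand with weekend peaks: 4 full weekly cycles.
demand = [100 + (20 if d % 7 >= 5 else 0) for d in range(28)]
print(enough_history(demand, period=7))          # True: 4 cycles >= 2
print(round(lag_autocorrelation(demand, 7), 2))  # 0.75: strong weekly pattern
```

The same check applies to yearly seasonality: with daily data and `period=365`, fewer than 730 observations fail the rule of thumb, matching the "more than two years" answer above.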

Formulating the terms of reference for a complex intelligent system is an important and labor-intensive task. It must take into account the requirements for data collection and storage, define criteria for evaluating the forecast, and describe how the system is embedded in the business process. Beltel Datanomics specialists have experience writing such terms of reference and will be happy to help you.

We recommend starting with an assessment of priorities: understand which task is the most time-consuming and critical for the business, and rank such "problems". Then evaluate whether the data needed to solve them is available. If it is difficult to do this on your own, Beltel Datanomics specialists will be happy to advise you.
If you have quality data, we can deploy the solution in 4 weeks.
We have specialists from various fields on our team. To build a solution that has value for the business, it is not enough to be able to write program code. You need to understand the customer's business processes, grasp the problem and formulate the task precisely, evaluate data quality, find the right model and the factors that affect its accuracy, then integrate the solution into the client's accounting systems and build a user-friendly interface and visualization. Therefore, the problem is worked on by data scientists, software developers, financial and operational specialists, marketers, and engineers.
You can start: that amount of data can be enough for some tasks. However, if the data contains seasonality or "holiday" peaks, predicting them can be difficult. In other words, the quality (accuracy) of the forecast grows with the amount of data.
Unfortunately, no. The value of a forecast lies in its accuracy, and accuracy grows with the amount of data. Lumping a single month's data into one pool, we lose its sufficiency. Such a solution will have no value for the business.