Software development is both a professional activity and a process aimed at creating software and maintaining its performance, quality and reliability, using technologies, methodologies and practices from computer science, project management, mathematics, engineering and other fields of knowledge.
Reliability and quality of computer programs
The reliability of computer programs is one of the main qualities not only of programs but of any equipment, machines and devices used in everyday life, transport, communications and industry.
The main reliability criteria in technology:
- fault tolerance,
- maintainability,
- survivability.
Fault tolerance is the absence of failures and malfunctions in the operation of machines, devices and equipment, including computers (electronic computing machines). The presence of failures and malfunctions in the operation of machines, devices and equipment indicates their insufficient or low reliability.
Maintainability is the ability to repair machines, devices and equipment when failures, malfunctions or other defects are detected in them, including computers, computing devices, systems and complexes.
A distinctive feature of computer programs is that, unlike all other equipment, machines and devices, they do not break down and are not subject to wear. In this sense, computer programs are exceptionally resilient.
Failures and malfunctions that occur while programs are being executed on computers are caused by design errors and defects introduced during the development or debugging of those programs. The number of errors in a program and the time needed to debug it are usually considered unknown in advance.
No other branch of science and technology faces a situation with the detection and correction of errors and defects like the one in programming and software development. One of the largest foreign firms has even announced the presence of uncorrectable errors in its software products.
Software
Software products are computer programs supplied to consumers for installation and use on computers or in computer networks. Software products are a type of industrial product and must meet the requirements of international ISO standards.
One of the main requirements that international ISO standards place on industrial products is the reliability of the supplied devices, machines and equipment, and above all fault tolerance and the absence of design errors and defects in the supplied products.
Under the Law of the Russian Federation "On Protection of Consumer Rights," any product in which defects are found must be replaced, or the purchase price and associated costs must be refunded.
In accordance with international ISO requirements, a software product should not contain errors and defects that lead to failures and malfunctions when the programs are executed on computers or in a computer network.
Programs with defects and errors are not a product.
Stages of Program Development
Software development deals with problems of quality, cost and reliability. Some programs contain millions of lines of source code that are expected to execute correctly under changing conditions. The complexity of such software is comparable to that of the most complex modern machines and space projects.
Programming technology includes the following stages:
- Analysis and statement of the problem.
- Design: development of specifications.
- Design: development of algorithms.
- Writing the source code of the programs.
- Testing and debugging of the programs.
- Acceptance testing and delivery of the programs.
- Program maintenance.
Most of the work of programmers consists of writing the source code of programs in one of the programming languages, as well as testing and debugging programs on computers.
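As a minimal illustration of the testing and debugging stage, the sketch below uses JUnit 5 to check a small utility function; the class, function and test cases are hypothetical examples for illustration, not code referenced in this article.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical unit under test: a small utility whose behavior is easy to specify.
class TextUtils {
    // Returns true if the string reads the same forwards and backwards.
    static boolean isPalindrome(String s) {
        int i = 0, j = s.length() - 1;
        while (i < j) {
            if (s.charAt(i) != s.charAt(j)) {
                return false;
            }
            i++;
            j--;
        }
        return true;
    }
}

// Unit tests document the expected behavior and catch regressions during debugging.
class TextUtilsTest {
    @Test
    void recognizesPalindromes() {
        assertTrue(TextUtils.isPalindrome("level"));
        assertTrue(TextUtils.isPalindrome(""));   // edge case: empty string
    }

    @Test
    void rejectsNonPalindromes() {
        assertFalse(TextUtils.isPalindrome("program"));
    }
}
```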
The source code of programs, as well as executable files, is protected by copyright and is the intellectual property of its authors and rights holders.
Structural design of programs
Computer science as a scientific discipline offers a reliable software development technology based on structural programming methods. It combines program testing with verification based on evidence-based programming, systematically analyzing the correctness of algorithms in order to develop programs without algorithmic errors.
This methodology is aimed at solving problems on computers and is similar both to the technology for developing algorithms and programs used by domestic students and programmers at programming olympiads, and to the practice, used at IBM Corporation since the 1970s, of combining testing with structured pseudocode for documenting programs.
The methodology of structural software design can be applied with a wide variety of languages and programming tools to develop reliable programs for a wide variety of purposes.
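As a minimal sketch of what evidence-based programming can look like in practice (an illustration, not an excerpt from the textbooks cited below), the following program states a precondition, a loop invariant and a postcondition for a simple algorithm, and checks the precondition at run time with an assertion.

```java
// Sketch of an evidence-based (proof-oriented) programming style:
// the specification is written down and the loop invariant is made explicit.
public class MaxOfArray {

    // Precondition:  a.length > 0
    // Postcondition: result == max(a[0], ..., a[a.length - 1])
    static int max(int[] a) {
        assert a.length > 0 : "precondition violated: array must be non-empty";

        int m = a[0];
        // Loop invariant (holds before each iteration):
        //   m == max(a[0], ..., a[i - 1])
        for (int i = 1; i < a.length; i++) {
            if (a[i] > m) {
                m = a[i];
            }
            // Invariant re-established: m == max(a[0], ..., a[i])
        }
        // On exit the invariant holds for the whole array, giving the postcondition.
        return m;
    }

    public static void main(String[] args) {
        System.out.println(max(new int[] {3, 7, 2}));  // prints 7
    }
}
```

Run with assertions enabled (java -ea MaxOfArray); the stated invariant is the informal proof that the returned value really is the maximum of the array.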
One of the large projects in reliable software development was the on-board software for the Buran spacecraft, which for the first time used an on-board computer to control the spacecraft automatically and performed a successful launch and landing.
Training in Programming Technologies
Training in software development technologies based on the methodology of structural program design began in the early 1980s at MIEM in the training of engineer-mathematicians in the specialty "Computer Software" and is presented in our computer science and programming textbooks.
The greatest success was the development of basic packages of computer science programs for domestic and imported personal computers - BC, Corvette, UKNC, Yamaha and IBM PC - which spread throughout the country as free and open source software in the late 1980s.
From the very first year, all MIEM students master pseudocode for describing algorithms and documenting all the programs they develop in Pascal, Basic, C, Fortran, PL/1 and other languages, and from the third or even the second year they begin developing software.
The greatest success was achieved in the training of engineer-mathematicians at MIEM and engineer-economists at MATI, who already in their first year began developing programs with proofs of the correctness of their algorithms with respect to the mathematical statements of the problems being solved.
Examples of problem solving with the development of algorithms and proofs of their correctness are presented in Kaymin's university and school computer science textbooks, which were distributed throughout the country in millions of copies and entered the educational standards as specifications of the Unified State Exam in computer science.
In 2009, 50 thousand schoolchildren successfully passed the exam in computer science, whose specifications covered the foundations of algorithmization, logic, analysis of the correctness of algorithms and elements of programming technology - the foundations of modern professional programming.
Monolith or microservices. Which IT architecture is preferred by large companies in Russia
Monoliths and microservices embody two opposite approaches to building application architecture. The former is a single product with a common code base; the latter is a "community" of independent modules. Each has its pros and cons, and TAdviser discussed them with participants in the IT market.
In the debate about which approach to the architecture of IT systems to choose, monolithic or microservice, there is no definite answer. According to experts interviewed by TAdviser, the choice of the appropriate architecture depends on specific business goals and technical requirements. For example, software used on ships, satellites, in automated process control systems or in hardware and software complexes does not require a high update rate, so its functionality can be implemented very well and efficiently on a simple platform in the form of a monolith, says Dmitry Ovchinnikov, head of the Laboratory for Strategic Development of Cybersecurity Products at the Gazinformservice Cybersecurity Analytical Center.
The advantage of microservices is that an IT product can be assembled from different modules that can be implemented separately, in different programming languages, with different frameworks and by different teams of programmers.
"All this allows you to use different technologies and, if necessary, replace them with others. The result is a kind of Lego constructor in which you can swap bricks. There is only one downside: to work at such a pace you need to set up proper CI/CD, hire a DevOps engineer and build well-designed pipelines," said Ovchinnikov.
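To make the idea of an independently deployable module concrete, here is a minimal sketch of a tiny self-contained service with a health endpoint, written against the JDK's built-in com.sun.net.httpserver API; the service name, port and path are arbitrary illustrative choices, and a real module would be wired into the CI/CD pipelines mentioned above.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A tiny stand-alone service: it can be built, tested, deployed and scaled
// independently of the rest of the system, which is the core microservice idea.
public class CatalogService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Health endpoint: lets orchestrators and CI/CD pipelines probe the module.
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.setExecutor(null); // default executor is sufficient for a sketch
        server.start();
        System.out.println("catalog-service listening on :8080");
    }
}
```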
The IT company Develonika (Softline Group of Companies), speaking of the advantages of microservices, noted that they reduce the customer's risks associated with the fault tolerance of its systems: vital processes do not stop. There is also more freedom in choosing custom development tools.
Companies for which control, reliability and security come first remain adherents of monolithic models. The core of the monolith's target audience is enterprises in industry (especially in process control, CAD and PLM systems), energy and transport, said Nikita Kardashian, head of the integrated process digitalization practice at Naumen.
The monolith is also used in ERP enterprise management systems (SAP, 1C, etc.). If a payroll system, a corporate portal and an electronic document management service appear around such platforms, this architecture cannot be called microservice, since each component within it is a separate large monolith that performs a whole range of business tasks and services, explained Sergey Polityko, information systems architect at IBS.
The main drawback of monolithic systems is scalability. Systems grow, more users appear, more services and features are required, and the monolith is not suited to rapid evolution, says Pavel Ivanov, director of the software solutions development department at Jet Infosystems. If a monolithic system goes down, everything stops working, whereas if one microservice fails, the user may not even notice a problem with the platform.
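One common way to achieve the failure isolation described above is for callers to use short timeouts and fallbacks. The sketch below, using the standard java.net.http client, assumes a hypothetical recommendations service at http://recommendations:8081; if it is unavailable, the caller degrades to an empty result instead of failing the whole request.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// If the recommendations service is down, fall back to a default instead of failing
// the whole page: one degraded microservice stays invisible to the user.
public class RecommendationsClient {
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofMillis(300))
            .build();

    static String fetchRecommendations(String userId) {
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://recommendations:8081/users/" + userId))
                .timeout(Duration.ofMillis(500))
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                return response.body();
            }
        } catch (Exception e) {
            // Timeout or connection failure: degrade gracefully instead of propagating.
        }
        return "[]"; // fallback: empty recommendation list
    }

    public static void main(String[] args) {
        System.out.println(fetchRecommendations("42"));
    }
}
```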
However, microservice architecture requires additional costs for maintenance, additional infrastructure and expensive specialists. As Pavel Ivanov said, large companies in the retail or finance sector that need to solve business problems quickly hire 5-7 teams at once, which rapidly build many separate fragments of a large system in order to launch the product on time.
A challenge of the microservice approach is that it brings together "vertical" independent teams, including both business and IT experts, to launch and develop each digital product. This requires business experts to have a certain level of understanding of IT implementation details, to know the methodologies and to be able to speak the same language as IT experts, said Grigory Gashnikov, IT architect of the Technological Transformation practice at Reksoft Consulting.
According to Maxim Pavlyukevich, development, testing and deployment automation engineer at ICL Services, for all the disadvantages of microservices (complexity of management and testing, high costs of IT infrastructure and project management), for large companies this model currently has practically no alternative and is, in fact, an industry standard.
Large IT companies like Yandex and Ozon mainly choose microservice architecture, says Andrei Arefiev, co-founder of Pragmatic Tools. According to him, this choice is due to the fact that companies operating in the B2C market do not know in advance how much capacity needs to be allocated for data processing and how much data will have to be processed.
Some leading banks use microservice architecture in their products, among them VTB, Sberbank, OTP Bank and MKB, said Vadim Kazantsev, executive director of ITFB Group. According to him, large credit organizations see the benefit of microservices in integration with external services, such as client data deduplication or speech recognition.
Most large companies still opt for a monolith, since the introduction of microservices requires extensive experience and deep expertise, says Vladimir Vigura, director of the Nota Modus product (T1 holding). Nevertheless, key market players try to choose the architecture based on the specifics of their IT landscape, the frequency of changes made to products and resource constraints, he added.
According to Oleg Chebulaev, general director of the IT company Mad Brains (a developer of mobile applications and services), large companies in the Russian Federation choose PaaS microservices, but with pre-configured integration tools. The idea is that all the components needed to create an application are packed into separate virtual containers. They can be quickly invoked, launched and, if necessary, given additional computing power to scale the project, Chebulaev explained.
Technologies that combine the advantages of monolithic architecture and modularity are also gaining popularity in Russia. Among them is Spring Modulith. As Pavel Myshev, Deputy Director for Key Clients at Cinimex, explained to TAdviser, the main idea is to develop the application as a single monolith, but with clearly defined modules that can be developed, tested and deployed independently of each other.
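Based on Spring Modulith's publicly documented ApplicationModules API (the article itself does not show code), a typical structure-verification test looks roughly like the sketch below; the Application class and the module layout are hypothetical.

```java
import org.junit.jupiter.api.Test;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.modulith.core.ApplicationModules;

// Hypothetical application root; in Spring Modulith, direct subpackages of this
// class's package (e.g. orders, billing, inventory) are treated as modules.
@SpringBootApplication
class Application {
}

// Fails the build if one module reaches into another module's internal packages,
// keeping the monolith modular as described above.
class ModularityTests {

    private final ApplicationModules modules = ApplicationModules.of(Application.class);

    @Test
    void enforcesModuleBoundaries() {
        modules.verify();
    }
}
```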
A mixed architecture is used in cases where it is necessary to develop a separate custom service that will be connected to another, monolithic application, said Dmitry Rossikhin, director of RDN Group. He added that large RDN Group customers tend to opt for a monolith.
Chronicle
2023
10% of software developers in the world use AI programming assistants
As of 2023, approximately 10% of software developers globally use artificial intelligence programming assistants, and approximately 63% of organizations worldwide are testing or deploying AI code generation tools. This is stated in a Gartner study whose results were published on April 11, 2024.
Analysts emphasize that AI-based code writing assistants provide a number of advantages that go beyond the actual generation and completion of program code. In particular, such tools increase the efficiency of the development process by stimulating brainstorming, and as a result the skills of programmers improve. In addition, job satisfaction grows, which contributes to employee retention and allows companies to reduce the costs associated with staff turnover. At the same time, the quality of program code improves thanks to the automated search for errors and potentially problematic algorithms, which contributes to greater reliability and security. AI technologies, as noted, help accelerate the launch of software products to the commercial market and increase customer satisfaction.
At the same time, the Gartner study speaks of the need to rethink ROI indicators when introducing AI code creation assistants. According to the experts, traditional approaches to evaluating efficiency cannot capture the full value of AI-based assistants. Among the key advantages of such tools are faster programming, reduced development time and cost savings.
"Traditional approaches to assessing ROI push executives to analyze cost-cutting-oriented metrics. This narrow focus does not fully capture the impact of introducing AI-based code writing assistants," says Filip Walsh, senior analyst at Gartner.
Companies implement AI tools to find and correct errors in code (automatic code validation, suggested changes based on text comments, etc.), as well as to generate draft versions of code. According to Yakov & Partners, a code-writing assistant based on generative AI (GenAI) can increase developer productivity and free developers from 10-15% of routine tasks through various functions, from suggesting a single line of code to optimizing an entire script. AI models are being introduced that can write code in different programming languages. Such tools are able to generate code from a plain-text task describing the behavior of a function, class or script that the programmer is going to write. At the same time, multi-agent systems are being developed that combine several small AI models. These agents communicate with each other through natural-language messages to perform different tasks. For example, one agent generates program code based on user requests, while a second acts as a code reviewer, eliminating inaccuracies in the code and improving its efficiency.
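The generator-plus-reviewer loop described above can be sketched with hypothetical interfaces; GeneratorAgent, ReviewerAgent and the revision loop below are illustrative assumptions rather than the API of any particular product.

```java
import java.util.Optional;

// Hypothetical multi-agent loop: one agent drafts code from a natural-language task,
// a second agent reviews the draft, and the draft is revised until the reviewer
// has no remarks or the attempt budget is exhausted.
public class AgentPipeline {

    // Both agents are assumed abstractions over some code-generation model.
    interface GeneratorAgent {
        String generate(String task, Optional<String> reviewerRemarks);
    }

    interface ReviewerAgent {
        Optional<String> review(String code); // empty = no remarks, code accepted
    }

    static String produceCode(String task, GeneratorAgent generator,
                              ReviewerAgent reviewer, int maxRounds) {
        Optional<String> remarks = Optional.empty();
        String draft = "";
        for (int round = 0; round < maxRounds; round++) {
            draft = generator.generate(task, remarks);   // agent 1: write the code
            remarks = reviewer.review(draft);            // agent 2: review the code
            if (remarks.isEmpty()) {
                break;                                   // accepted without remarks
            }
        }
        return draft;
    }
}
```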
Gartner analysts believe that demand for AI programming assistants will grow rapidly in the future. According to the forecasts, by 2028 approximately 75% of corporate software engineers will use such tools. GenAI technologies will take over routine operations, and developers will be able to use the freed-up time to solve more complex problems.[1]
How New Programming Tools Are Changing the Development Market - McKinsey
Next-generation programming tools are changing the capabilities of specialists and engineers at every stage of the software development life cycle, from planning and testing to deployment and maintenance. This is stated in a McKinsey study whose results were released on July 20, 2023.
These include low-code and no-code platforms, generative artificial intelligence tools, the "infrastructure as code" concept, automatic integration tools, etc. Such solutions simplify complex tasks and help identify errors and optimize applications. In addition, generative AI can independently create individual code fragments, reducing the need for manual work.
According to McKinsey estimates, in 2022 investments in the market for new programming tools amounted to approximately $2 billion. The number of vacancies in this area is growing rapidly: during 2022 it increased by 29% compared to the previous year, with the highest growth in software developer and data engineer positions.
It is noted that new-generation software can use AI to automate testing, which helps reduce the time developers spend on this task. On the other hand, abandoning traditional human review can lead to an increase in the number of errors in the software. Another problem with introducing new programming methods is that comprehensive monitoring and version control can become complicated due to uncoordinated changes and updates made by different development teams. The introduction of new tools can also take a long time because of technical difficulties, the need for large-scale retraining of developers and engineers, and other organizational obstacles.[2]
Cloud and other technologies impact IT teams - IDC
The widespread adoption of cloud computing and modern development technologies will have a significant impact on the composition of IT teams in various areas of the global market. This is stated in the IDC report, published on June 5, 2023.
Analysts note that a sharp increase in demand for services provided under the "as a service" model, as well as the convergence of modern development platforms, is leading to the formation of "hybrid" positions. In particular, many developers not only perform their immediate duties but are also responsible for other operational functions. This contributes to a comprehensive transformation of IT departments. Something similar was observed in the era of the birth of the commercial Internet and the rapid development of websites.
IDC highlights a number of key areas in IT development. These include, in particular, DevOps, a methodology for close collaboration between developers and IT operations specialists and for the mutual integration of their workflows to improve product quality. DataOps, a concept of enterprise data management in the AI era, is also of great importance: it transfers the DevOps experience to data management and analytics. In addition, the influence of DevSecOps, an extended DevOps practice that incorporates information security, is growing.
ITOps and MLOps are also mentioned. The former covers many different areas of IT, including server and PC administration, network management, end-user support, etc. MLOps is a set of practices aimed at the reliable and efficient deployment and maintenance of machine learning models. Among other important roles, IDC analysts highlight platform engineering (aimed at self-service for business units, partners and customers), ensuring the smooth operation of high-load services (SRE, one of the forms of DevOps implementation) and system administration.
IDC predicts that between 2022 and 2027 the demand for specialists in DataOps and MLOps will grow at a CAGR of 17.9% and 20.1%, respectively. Analysts believe that the importance of DevOps and DevSecOps roles will also increase, and in the case of DevSecOps double-digit growth is expected over the period under review. This segment is set for rapid development against the background of a worsening information security situation: the competitiveness and efficiency of organizations increasingly depend on the capabilities of the software they use, and implementing protection early helps, in the long term, to reduce costs and improve the quality of products and services. However, growth in the DevOps segment will not be as significant, since some of the functions of such specialists will be taken over by employees of platform engineering departments.
In general, IDC notes, by 2027 there will be a significant change in the duties of IT specialists at the macro level. At the same time, the importance of roles in ITOps and system administration will decrease, with CAGRs of -8.2% and -7.8%, respectively.
The results of the study suggest that IT professionals who perform traditional operational functions are increasingly facing a transition to positions that require certain technical skills. In many cases this means some experience in software development, the IDC report says.[3]
See also
- Supercomputers
- Algorithmization
- Programming
- Structural pseudocode
- Programming methodology
- IT project management
- Open source software
- Evidence-based programming
Literature
- Naur P. Programming Science. M.: Mir, 1982.
- Tursky M. Programming Methodology. M.: Mir, 1981.
- Dijkstra E. A Discipline of Programming. 1st ed. M.: Mir, 1978. 275 p.
- Sebesta R. W. Basic Concepts of Programming Languages. 5th ed. Translated from English. M.: Williams, 2001. 672 p.
- Sommerville I. Software Engineering. 6th ed. Translated from English. M.: Williams, 2002. 624 p.
- Kaymin V. A. Methods of Developing Programs in High-Level Languages. M.: MIEM, 1985.
- Kaymin V. A. Basics of Evidence-Based Programming. M.: MIEM, 1987.
- Kaymin V. A. Informatics. Textbook for university students. M.: INFRA-M, 1998-2009.
- Kaymin V. A. Informatics. Textbook for schoolchildren. M.: Progress, 2007-2009.
Internet sources
- Evidence-Based Programming Technologies
- Prolog and Logic Programming
- Informatics in Schools and Universities
- Olympiads in Computer Science and Programming
- Informatics: USE and Computer Exams
- Linux Software