Machine Learning Applied During the Design Phase

There are several broad aspects of AI that may give rise to specific hazards; the risks depend on how a system is implemented rather than on the mere presence of AI. Systems using sub-symbolic AI, such as machine learning, may behave unpredictably and are more prone to inscrutability in their decision-making. This is especially true when a situation is encountered that was not part of the AI's training dataset, and it is exacerbated in less structured environments. Undesired behaviour may also arise from flaws in the system's perception (originating either within the software or from sensor degradation), in its knowledge representation and reasoning, or from software bugs. It may also result from improper training, such as applying the same algorithm to two problems that do not have the same requirements. Machine learning applied during the design phase may have different implications than machine learning applied at runtime. Systems using symbolic AI are less prone to unpredictable behaviour.

Psychosocial hazards are those that arise from the way work is designed, organized, and managed, or from its economic and social context, rather than from a physical substance or object. They cause not only psychiatric and psychological outcomes such as occupational burnout, anxiety disorders, and depression, but can also cause physical injury or illness such as cardiovascular disease or musculoskeletal injury. Many hazards of AI are psychosocial in nature because of its potential to change how work is organized, increasing complexity and the interaction between different organizational factors. However, psychosocial risks are often overlooked by designers of advanced manufacturing systems.

AI is expected to change the skills required of workers, demanding retraining of existing workers, flexibility, and openness to change. The requirement to combine conventional expertise with computer skills may be challenging for existing workers, and over-reliance on AI tools may lead to deskilling of some professions.
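To make the risk of out-of-training-distribution inputs concrete, the following Python sketch shows one simple way a design team might flag inputs that lie far from anything in a model's training data before trusting its output. It is a minimal illustration only: the nearest-neighbour distance check, the threshold value, and names such as is_out_of_distribution are hypothetical, not a method described in this article.

```python
# Illustrative sketch: flag inputs far from the training distribution, where
# a sub-symbolic model is most likely to behave unpredictably.
import numpy as np
from sklearn.neighbors import NearestNeighbors

OOD_THRESHOLD = 3.0  # hypothetical cutoff; would be tuned per application


def fit_ood_detector(X_train: np.ndarray) -> NearestNeighbors:
    """Index the training data so new inputs can be compared against it."""
    return NearestNeighbors(n_neighbors=1).fit(X_train)


def is_out_of_distribution(detector: NearestNeighbors, x: np.ndarray) -> bool:
    """Flag an input whose nearest training example is farther than the cutoff."""
    distance, _ = detector.kneighbors(x.reshape(1, -1))
    return float(distance[0, 0]) > OOD_THRESHOLD


# Usage: a system could refuse to act autonomously on flagged inputs and
# defer to a human instead.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 4))    # stand-in for training features
detector = fit_ood_detector(X_train)
print(is_out_of_distribution(detector, np.zeros(4)))       # False: near training data
print(is_out_of_distribution(detector, np.full(4, 10.0)))  # True: far from training data
```

Such a check does not make the model's decisions interpretable; it only signals when the system is operating outside the conditions it was designed for.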

Increased monitoring may lead to micromanagement and thus to stress and anxiety; a perception of surveillance may also lead to stress. Controls for these include consultation with worker groups, extensive testing, and attention to introduced bias. Wearable sensors, activity trackers, and augmented reality may likewise lead to stress from micromanagement, both for assembly-line workers and for gig workers. Gig workers also lack the legal protections and rights of formal workers. There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours.

Algorithms trained on past decisions may mimic undesirable human biases, for example past discriminatory hiring and firing practices. Information asymmetry between management and workers may lead to stress if workers do not have access to the data or algorithms that are the basis for decision-making. In addition to a model that inadvertently learns discriminatory features, intentional discrimination may occur through metrics designed to discriminate covertly via variables that correlate with protected attributes in non-obvious ways. In complex human-machine interactions, some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead.
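The following Python sketch illustrates the mechanism described above: a model trained on past, biased hiring decisions can reproduce that bias through a correlated proxy variable even when the protected attribute itself is excluded. The synthetic data, variable names, and the simple selection-rate check are all hypothetical, intended only to show the reasoning, not to represent any particular system.

```python
# Illustrative sketch: bias in historical decisions leaks through a proxy
# variable even when the protected attribute is dropped from the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
proxy = group + rng.normal(0.0, 0.3, n)    # e.g. neighbourhood, strongly correlated with group
skill = rng.normal(0.0, 1.0, n)            # legitimate qualification signal

# Historical hiring decisions favoured group 0 regardless of skill (the bias).
hired = ((skill + 1.0 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5).astype(int)

# Train only on skill and the proxy; the protected attribute is deliberately excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
predicted = model.predict(X)

# Demographic-parity-style check: selection rates per group should be similar.
rate_0 = predicted[group == 0].mean()
rate_1 = predicted[group == 1].mean()
print(f"selection rate, group 0: {rate_0:.2f}")
print(f"selection rate, group 1: {rate_1:.2f}")  # noticeably lower: the proxy carries the bias
```

A per-group audit of this kind is one of the simpler ways to surface such effects, whether the correlated variable was included inadvertently or by design.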