4.3. AI in Industry

Artificial Intelligence (AI) is transforming numerous industries by enhancing efficiency, accuracy, and decision-making. This section explores how AI is being applied in three critical domains – Healthcare, Finance, and Autonomous Vehicles – highlighting current trends, technologies, and real-world case studies. Each domain is covered in turn, tracing both recent advances and the chronological development of these applications.

Healthcare

AI has become an indispensable tool in healthcare, improving diagnostics and enabling personalized treatments. AI systems can analyze vast amounts of medical data (from imaging scans to genetic information) much faster and often more accurately than human clinicians. This capability addresses long-standing challenges such as diagnostic errors (estimated at ~5% in outpatient settings) by providing data-driven insights and predictions.

AI Diagnostics: In medical diagnostics, AI algorithms – especially deep learning models – excel at detecting patterns in images and clinical data that may be missed by the human eye. For example, AI image analysis can identify early signs of diseases in X-rays, MRIs, or CT scans with remarkable precision. Google DeepMind’s work offers a prime illustration: their AI system was trained on thousands of retinal OCT scans to detect over 50 eye diseases (like diabetic retinopathy and glaucoma). The system not only matched expert doctors in diagnostic accuracy, but even highlighted the scan features supporting its conclusions. In tests, the DeepMind AI recommended the same patient treatment as a panel of ophthalmologists 94% of the time – a ground-breaking result that underscores AI’s potential in clinical settings. AI-based diagnostics are also employed in pathology (examining blood samples or biopsies with AI microscopes) and in predictive analytics, such as forecasting disease outbreaks or a patient’s risk of complications based on electronic health records.
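As a toy illustration of the underlying idea (not DeepMind's actual model), a diagnostic classifier maps features extracted from a scan to a disease probability. The sketch below uses a hand-weighted logistic model as a stand-in for a trained deep network; the feature names, weights, and thresholds are invented for illustration:

```python
import math

# Hypothetical features a perception model might extract from a retinal scan;
# the names and weights below are invented for this sketch, not clinical values.
WEIGHTS = {"lesion_area": 2.0, "fluid_volume": 1.5, "layer_thinning": 1.0}
BIAS = -3.0

def disease_probability(features: dict) -> float:
    """Logistic model mapping extracted scan features to a referral probability."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A scan with large lesions and fluid build-up scores high risk...
high_risk = disease_probability({"lesion_area": 1.8, "fluid_volume": 1.2, "layer_thinning": 0.5})
# ...while a near-clean scan scores low risk.
low_risk = disease_probability({"lesion_area": 0.1, "fluid_volume": 0.0, "layer_thinning": 0.1})
print(round(high_risk, 2), round(low_risk, 2))
```

In a real system, the features themselves would come from a convolutional network trained on labeled scans, and the output would drive a referral recommendation rather than a diagnosis on its own.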

Personalized Medicine: Beyond diagnostics, AI is driving personalized medicine – tailoring treatment to individual patient characteristics. Machine learning models can process a patient’s genetic profile, lifestyle factors, and treatment history to recommend the most effective therapies. In oncology, for instance, AI systems suggest cancer treatment plans by cross-referencing a patient’s tumor genetics and medical literature for similar cases. IBM’s Watson for Oncology was an ambitious early effort in this domain: it used natural language processing to read millions of medical journal articles and analyze a patient’s records, aiming to recommend customized cancer treatments. The idea was that an AI could quickly provide oncologists with evidence-backed options, potentially improving outcomes and reducing trial-and-error in drug selection. This approach points to a future where AI helps doctors choose the right drug or dosage for each patient, minimizing side effects and maximizing efficacy based on data-driven predictions. Indeed, hospitals are beginning to deploy AI for tasks like predicting which patients are at highest risk of conditions like sepsis or readmission, enabling preventive care tailored to those individuals.
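The cross-referencing idea behind systems like Watson for Oncology can be sketched as evidence matching: rank therapies by how well their known biomarkers overlap a patient's tumor profile. The knowledge base, therapy names, and gene sets below are invented for illustration:

```python
# Invented mini knowledge base: each therapy is linked to biomarkers it targets.
KNOWLEDGE_BASE = {
    "therapy_A": {"EGFR", "KRAS"},
    "therapy_B": {"BRCA1"},
    "therapy_C": {"EGFR", "ALK"},
}

def rank_therapies(patient_mutations: set) -> list:
    """Rank therapies by how many of their target biomarkers the patient carries."""
    scored = [(len(targets & patient_mutations), name)
              for name, targets in KNOWLEDGE_BASE.items()]
    # Sort by overlap (descending), breaking ties by name; drop zero-overlap therapies.
    return [name for score, name in sorted(scored, key=lambda t: (-t[0], t[1]))
            if score > 0]

print(rank_therapies({"EGFR", "KRAS", "ALK"}))
```

A production system would weight evidence by trial quality and patient history rather than counting overlaps, but the shape of the computation – match patient data against curated medical knowledge, then rank options – is the same.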

Case Studies: The real-world applications of AI in healthcare illustrate both its potential and challenges. IBM Watson Health made headlines by partnering with leading hospitals to apply AI in clinical decision support. In one project at MD Anderson Cancer Center, Watson’s AI was trained to recommend leukemia treatments by summarizing patient electronic health records and searching for relevant clinical trial data. While Watson showcased the promise of AI (one early demo showed Watson diagnosing rare conditions in seconds with supporting evidence), the implementation proved difficult. The MD Anderson trial was halted in 2016 after the hospital spent $62 million, as the system’s recommendations did not sufficiently match expert judgment and workflow needs. This underscores that high-quality data and seamless integration into clinical practice are critical for AI’s success. On the other hand, DeepMind’s diagnostic tools have had concrete research successes. DeepMind (now part of Google Health) developed an AI model with the U.K.’s Moorfields Eye Hospital that can detect retinal diseases from OCT scans as accurately as specialists. This tool can distinguish conditions like age-related macular degeneration and diabetic eye disease early, and even recommend referral decisions. Although DeepMind’s system is still being validated before routine clinical use, it represents a significant trend: AI assisting doctors in making faster, more accurate diagnoses. In sum, AI in healthcare is enabling more accurate diagnostics and personalized treatment plans, but effective deployment requires careful validation, integration with healthcare workflows, and addressing issues such as data quality and transparency in AI decision-making.

Finance

The finance industry was an early adopter of AI technologies, using them to automate complex tasks, manage risk, and detect fraudulent activities. Today, AI-driven algorithms execute trades in fractions of a second and monitor transactions across the globe in real time. Financial institutions leverage machine learning for everything from investment strategies to customer service chatbots, but two of the most impactful areas are algorithmic trading and fraud detection.

Algorithmic Trading: The majority of trading in stock and currency markets is now done by algorithms rather than humans. Algorithmic trading refers to using computer programs (often powered by AI) to automatically execute trades based on pre-defined strategies or real-time market signals. A specialized subset, high-frequency trading (HFT), involves algorithms that make thousands of trades per second, seeking to profit from tiny price discrepancies. AI enhances these systems by analyzing enormous amounts of market data and news faster than any person could, identifying subtle patterns or arbitrage opportunities. Over the past decade, the share of trading volume driven by algorithms has grown dramatically – by some estimates, more than half of U.S. equity trading volume in 2023 was attributable to high-speed algorithmic and AI-driven strategies. These AI models use techniques like deep learning and reinforcement learning to continuously improve their predictions of market movements. For example, an AI trading model might learn from historical data to forecast stock price trends, and then automatically trigger buy/sell orders based on its predictions. Investment firms and hedge funds have deployed such AI to optimize portfolios and execute complex strategies that react to market conditions in milliseconds. The chronological evolution of this field is evident: simple rule-based trading systems in the 1990s have given way to modern AI systems that learn and adapt. Importantly, AI allows trading algorithms to consider a wider range of inputs (social media sentiment, economic indicators, etc.) and to adjust on the fly, which can potentially increase returns. However, these advantages come with new challenges around market stability – rapid AI-driven trades have been linked to “flash crashes” and unpredictable market swings, prompting regulators to monitor algorithmic trading closely.
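A minimal example of a rule-based strategy of the 1990s kind that AI systems later replaced is a moving-average crossover: buy when the short-term average price rises above the long-term one, sell when it falls below. The window sizes here are arbitrary illustrations; a modern AI system would replace this hand-coded signal with a learned prediction model:

```python
def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=5):
    """Emit 'buy' when the short-term average crosses above the long-term one,
    'sell' when below, else 'hold'."""
    if len(prices) < long:
        return "hold"                      # not enough history yet
    fast, slow = sma(prices, short), sma(prices, long)
    if fast > slow:
        return "buy"
    if fast < slow:
        return "sell"
    return "hold"

# Rising prices: the recent average exceeds the longer-term average.
print(signal([10, 10, 10, 11, 12, 13]))   # -> buy
```

The contrast with modern systems is in the inputs and adaptation: this rule sees only past prices and never changes, whereas a learning-based trader ingests news, sentiment, and order-book data and retrains as markets shift.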

Fraud Detection: AI is also revolutionizing fraud detection and risk management in finance. Banks and payment companies deal with fraudulent transactions, money laundering, and cyber-attacks on a daily basis. Traditional rule-based fraud detection systems (e.g. flagging transactions over a certain dollar amount) can be rigid and generate many false alarms. AI offers a more flexible and intelligent approach: machine learning models are trained on vast datasets of legitimate and fraudulent transactions, learning the subtle differences between normal behavior and suspicious activity. Using both supervised learning (learning from known fraud cases) and unsupervised learning (detecting new anomalous patterns), AI systems can identify fraud in real time. For instance, if a credit card is suddenly used in an unusual location for a high-value purchase, an AI system can instantaneously analyze dozens of features (merchant, amount, location, customer history, etc.) and assign a fraud risk score. If the score is above a threshold, the system might automatically block the transaction or alert a human analyst. These decisions happen in seconds or less, enabling “real-time anomaly detection” at scale. PayPal’s fraud detection is a notable case – the company deals with billions of transactions and turned to AI to monitor them around the clock. PayPal uses machine learning models and even graph neural networks (which map connections between accounts) to spot organized fraud rings and coordinated attacks. By deploying an AI-based solution globally, PayPal was able to improve its real-time fraud detection accuracy by 10% while greatly reducing the workload on its human fraud investigators. This resulted in fewer false declines of legitimate customers and quicker responses to illegitimate activities. Similarly, banks like American Express reported significant improvements (e.g. 6% increase in fraud detection) by using AI models that continuously learn new fraud patterns. 
These percentages are substantial given the scale of financial transactions, translating to millions of dollars saved. Beyond transactions, AI also assists in risk assessment – for example, AI can analyze a customer’s behavior and credit history to generate a more nuanced credit score or to detect identity theft. Importantly, as fraud tactics evolve, AI systems can adapt by retraining on the latest data, whereas static rule systems would require manual updates. Financial firms now commonly maintain dedicated AI teams and data science units to keep their fraud models up-to-date and to comply with regulatory standards (like anti-money-laundering rules). The timeline of fraud prevention shows an escalating “arms race”: as fraudsters become more sophisticated (using stolen identities, synthetic accounts, etc.), banks counter with advanced AI, which in turn forces criminals to find new tricks. This dynamic makes AI an essential component in maintaining trust and security in the financial system.
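The scoring-and-threshold flow described above can be sketched as follows. The features, weights, and thresholds are invented for illustration; in production the score would come from a trained model, not hand-set rules:

```python
def fraud_score(txn, profile):
    """Combine simple feature checks into a risk score in [0, 1].
    Weights are illustrative stand-ins for a learned model's output."""
    score = 0.0
    if txn["amount"] > 5 * profile["avg_amount"]:
        score += 0.4                      # unusually large purchase
    if txn["country"] != profile["home_country"]:
        score += 0.3                      # unfamiliar location
    if txn["merchant"] not in profile["known_merchants"]:
        score += 0.2                      # first-time merchant
    if txn["hour"] < 5:
        score += 0.1                      # odd hour for this customer
    return score

def decide(score, block_threshold=0.7, review_threshold=0.4):
    """Map a risk score to an action, mirroring the block/alert flow above."""
    if score >= block_threshold:
        return "block"
    if score >= review_threshold:
        return "review"
    return "approve"

profile = {"avg_amount": 40.0, "home_country": "US", "known_merchants": {"grocer", "cafe"}}
risky = {"amount": 900.0, "country": "RO", "merchant": "electronics", "hour": 3}
print(decide(fraud_score(risky, profile)))   # -> block
```

The adaptability advantage described above corresponds to replacing the hand-set weights with a model retrained on fresh transaction labels, so new fraud patterns shift the scores without anyone rewriting rules.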

Case Studies: Two examples illustrate AI’s diverse roles in finance. First, JP Morgan’s COiN AI (Contract Intelligence) demonstrates how AI can automate labor-intensive back-office processes. Announced in 2017, COiN is a machine learning system that reads and interprets legal documents – specifically, commercial loan agreements – far faster than human lawyers. By 2017 it was reported that COiN could review 12,000+ contracts in just a few seconds, a task that previously consumed 360,000 hours of lawyers’ time each year. The AI was trained to recognize patterns in clauses and pinpoint key data (like expiration dates or conditions) in contracts. By deploying COiN, JP Morgan not only saved time and operational costs, but also reduced errors that might occur in manual processing (human reviewers can overlook details, especially when fatigued). This case study highlights a trend of AI handling mundane, repetitive tasks in finance, freeing human employees to focus on higher-value work. The second example is PayPal’s fraud detection system, already touched upon above. PayPal’s platform, which processes over 35,000 transactions per minute, uses an AI-based fraud prevention engine that combines machine learning models with a huge dataset of past transactions. A noteworthy innovation by PayPal is using graph analysis – treating its entire customer transaction network as a graph of nodes and connections – so that if one account is flagged for fraud, the AI swiftly evaluates connected accounts or transactions for shared patterns. PayPal’s system has been so effective that it achieved a substantial increase in detection accuracy (as mentioned, a 10% lift) and has likely prevented millions of dollars in fraud losses. These case studies underscore that in finance, AI is not a futuristic concept but a present reality: from front-office trading algorithms executing in microseconds to back-office bots parsing documents, AI is deeply embedded in the industry’s daily operations.
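To make the COiN idea concrete, here is a toy sketch of clause extraction: pulling an expiration date and a governing-law clause out of contract text with patterns. Real systems like COiN use trained models rather than hand-written regexes, and the sample contract text below is invented:

```python
import re

CONTRACT = """
This Agreement shall expire on 2026-03-31 unless renewed.
This Agreement is governed by the laws of the State of New York.
"""

def extract_terms(text):
    """Pull a few key data points out of contract text with simple patterns."""
    terms = {}
    m = re.search(r"expires? on (\d{4}-\d{2}-\d{2})", text)
    if m:
        terms["expiration_date"] = m.group(1)
    m = re.search(r"governed by the laws of ([^.]+)", text)
    if m:
        terms["governing_law"] = m.group(1).strip()
    return terms

print(extract_terms(CONTRACT))
```

The gap between this sketch and a production system is exactly where machine learning earns its keep: real contracts phrase the same clause hundreds of ways, which is why COiN learns clause patterns from examples instead of relying on fixed expressions.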

Autonomous Vehicles

Perhaps the most visibly futuristic application of AI in industry is the development of autonomous vehicles. Self-driving cars leverage AI for perceiving the environment, making decisions, and controlling the vehicle without human input. In recent years, we have witnessed rapid progress in this field – moving from experimental prototypes to limited commercial services – although achieving full autonomy worldwide remains a work in progress. This section examines the core AI-driven technologies enabling autonomous driving, the current state and challenges in deployment, and case studies of leading players like Tesla and Waymo.

Core Technologies: An autonomous vehicle (AV) operates through a sophisticated pipeline of sensory input, AI-driven interpretation, planning, and action. At the foundation are the perception systems, which include an array of sensors to observe the environment. Modern self-driving cars are typically equipped with LiDAR (Light Detection and Ranging) units that emit laser pulses to create a precise 3D map of surroundings, radar sensors that measure object distances and speeds using radio waves, cameras that provide high-resolution visual input akin to human eyes, and ultrasonic sensors for near-distance obstacle detection (useful in parking and low-speed maneuvers). These sensors work concurrently to give the car a multi-modal view of the world: for example, cameras can read traffic lights and signs, LiDAR gives accurate depth information day or night, and radar can see through rain or fog better than cameras. The raw data from all sensors are fed into AI perception algorithms. Using techniques like convolutional neural networks (for image recognition) and sensor fusion, the AI identifies and tracks objects around the car – distinguishing cars, pedestrians, bicycles, lane markings, traffic signals, and more. This perception stage is essentially the vehicle’s “eyes and ears,” creating a constantly updating model of the external world. Next comes the decision and planning stage, effectively the “brain” of the autonomous car. Here, AI systems (often involving rule-based planners supplemented by machine learning for specific tasks) decide how the vehicle should behave based on the perceived environment. This includes path planning (determining the best route and lane position), speed control, and reactive maneuvers like stopping for a pedestrian or avoiding an obstacle. 
The planning AI must follow traffic laws and also predict the behavior of other road users – a complex challenge that involves probabilistic models and sometimes reinforcement learning (some systems learn optimal driving policies from simulations of traffic scenarios). Finally, the control stage translates the high-level decisions (“slow down for a turn” or “change lane to the left”) into low-level commands that operate the vehicle’s actuators. Control algorithms govern steering angle, throttle, and braking smoothly and reliably, akin to how a human driver’s brain sends signals to their hands and feet. All these components – perception, planning, control – must work in real time and with a high degree of reliability. The AI software is usually run on powerful onboard computers (often with specialized AI chips or GPUs) that can handle the intensive computations for sensor processing and neural network inference. Additionally, autonomous vehicles often utilize high-definition maps and GPS for localization, and may communicate with cloud services or other vehicles (Vehicle-to-Everything, V2X communication) to enhance their awareness beyond line-of-sight. Robust software architecture is crucial; for safety, many AV systems have redundant algorithms and fail-safes. In summary, the core technologies of self-driving cars combine advanced sensors and AI algorithms to perceive the environment and navigate – effectively replicating and surpassing the capabilities of a human driver in many respects.
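One concrete piece of the sensor-fusion step is combining noisy distance estimates from different sensors, weighting each by its reliability. The sketch below uses inverse-variance weighting, the optimal rule for independent Gaussian measurements; the noise figures are illustrative, and a full system would run a Kalman filter over time rather than fuse a single snapshot:

```python
def fuse(estimates):
    """Fuse (value, variance) pairs by inverse-variance weighting.
    More precise sensors (lower variance) get proportionally more weight."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total             # fused estimate and its (smaller) variance

# LiDAR is precise (low variance); radar is noisier but robust in bad weather.
lidar = (25.2, 0.04)    # distance in metres, variance
radar = (24.0, 1.0)
dist, var = fuse([lidar, radar])
print(round(dist, 2))
```

Note how the fused estimate lands close to the LiDAR reading but the radar still nudges it, and the fused variance is lower than either sensor's alone: that variance reduction is the point of carrying multiple sensor modalities.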

Development Status: As of the mid-2020s, autonomous vehicle technology has matured from research labs to early commercial deployment, but it’s not yet ubiquitous. Companies like Waymo, Cruise, Baidu Apollo, and others have launched limited ride-hailing services using fully driverless cars in select cities. Waymo (an Alphabet/Google subsidiary), which started as Google’s self-driving car project in 2009, reached a significant milestone by operating a robotaxi service in Phoenix, Arizona. By 2023, Waymo’s driverless cars had provided over 700,000 rides to public passengers without a human safety driver on board. The vehicles navigate urban streets, pick up and drop off riders, and have logged millions of autonomous miles. Importantly, companies have been publishing safety data to demonstrate reliability: Waymo reported that, across more than 7.1 million miles of fully autonomous driving (in Phoenix, San Francisco, and other areas), its cars had a lower crash rate than human-driven vehicles, with significantly fewer injuries – an estimated 17 fewer injury-causing crashes than a comparable human-driven fleet would have had. This suggests that, under certain conditions, AI drivers can be as safe or safer than average humans. Despite these advances, self-driving cars are not yet commonplace in most cities. The technology still faces major challenges: one is handling complex and unpredictable scenarios (known as edge cases) – for instance, interpreting hand gestures from a police officer directing traffic, or reacting to debris falling off a truck. Adverse weather conditions pose another challenge: heavy rain or snow can obscure sensors or create situations (like slippery roads) that are difficult for AI to reliably navigate. The AI must also anticipate human behavior; for example, predicting when a pedestrian might jaywalk or when a driver in another car will run a red light is extremely hard. 
These are situations humans navigate using intuition and social understanding, which AI struggles to replicate. Regulatory and safety validation is yet another challenge – proving that an autonomous vehicle is safe enough is a high bar, and different countries/states have different regulations on testing and deploying driverless cars. Tragically, there have been setbacks that illustrate the risks: in 2018, a self-driving test vehicle operated by Uber failed to recognize a pedestrian crossing the road at night and did not brake in time, resulting in a fatal accident. Investigations revealed the AI had detected the person 5.6 seconds before impact but misclassified her and never decided to stop. This incident led to a temporary pause in autonomous testing across the industry and underscored the need for better sensor fusion and safety protocols. It highlighted that AI systems can make errors in perception or prediction that humans might not, reinforcing the importance of rigorous testing and having safety fallbacks (such as attentive human safety drivers or automatic emergency braking). As of 2025, most experts agree that fully autonomous vehicles (Level 5) – which can drive anywhere in any conditions with no human input – are still some years away. However, Level 4 autonomy (fully driverless operation but within limited areas or conditions) is becoming a reality in certain locales (for example, geofenced urban centers with good weather and map data). The industry’s trajectory suggests a gradual expansion of these geofenced deployments as the technology improves. In the near future, we can expect more autonomous shuttles in campuses, self-driving trucks on specific highway routes, and taxi services in downtown cores of cities like San Francisco, Shenzhen, or Dubai. 
The chronological development shows steady progress: from the DARPA Grand Challenge experiments in 2004–2007 (which proved autonomous vehicles can handle desert trails) to the first self-driving car rides for consumers in the late 2010s, and now the early commercial robotaxis in the 2020s. Each year, AI software and sensor hardware improve (for instance, today’s AI models for self-driving are far better at pedestrian detection than those from five years ago), and these incremental gains bring the world closer to widespread autonomous transportation.

Case Studies: Two prominent players exemplify different approaches to autonomous vehicles – Tesla and Waymo. Tesla has integrated AI-driven autonomy features into its consumer electric cars, offering systems named Autopilot and Full Self-Driving (FSD) to customers. Autopilot (introduced in 2015) is an advanced driver-assistance system (ADAS) that provides Level 2 automation – it can steer within a lane, adjust speed via adaptive cruise control, and even change lanes or park on its own, but requires the human driver to remain attentive at all times. In other words, it is not a fully self-driving system, despite Tesla’s marketing of the “Full Self-Driving” add-on. In fact, Tesla’s FSD beta (as of 2025) extends Autopilot’s capabilities to city streets (navigating traffic lights, stop signs, intersections), yet the company acknowledges that the driver must be ready to take over at any moment and that the system may not handle every scenario. CEO Elon Musk has famously and repeatedly predicted that Tesla would achieve Level 5 autonomy (true driverless operation) “within a year or two” – predictions made as early as 2015 and many times since – but these goals have not been met to date, and Tesla vehicles on the road still operate at Level 2. Tesla’s approach relies on commodity sensors: it controversially decided to forgo LiDAR, instead using cameras, radar, and ultrasonic sensors and betting on vision-based AI – and it has since dropped radar and ultrasonic sensors as well, moving to a camera-only “Tesla Vision” approach. This has the advantage of lower cost and uses the millions of Tesla cars in customers’ hands to constantly collect driving data to improve the AI. However, Tesla’s strategy of deploying beta self-driving software to consumers has been scrutinized. There have been several crashes, some fatal, involving Tesla cars on Autopilot – for example, collisions with stationary emergency vehicles that the system did not recognize in time. These incidents led regulators (like the U.S. NHTSA) to investigate Tesla’s software for safety issues. 
Critics argue that terms like “Autopilot” and “Full Self-Driving” can mislead customers into overtrusting the system. Tesla, for its part, has released data claiming that when Autopilot is engaged, the crash rate per mile is lower than for manual driving, suggesting the feature can improve safety by reducing human error. The chronology of Tesla’s self-driving development shows both rapid innovation and the pitfalls of real-world testing: each software update expands the AI’s abilities (e.g., handling roundabouts or unprotected left turns), yet each year’s end finds the company still short of full autonomy. On the other hand, Waymo (and similarly, GM’s Cruise) represents a more conservative, infrastructure-heavy approach. Waymo’s self-driving cars (retrofitted Chrysler Pacifica minivans and Jaguar I-PACE SUVs, operated under its Waymo One ride-hailing service) are equipped with top-of-the-line LiDARs, radars, and 360° cameras, and they operate in carefully mapped zones. After over a decade of development and testing millions of miles on public roads, Waymo launched a commercial driverless taxi service in the Phoenix metro area in 2020. By late 2023, Waymo expanded rider-only operations to parts of San Francisco and Los Angeles. The safety record so far is encouraging: Waymo published that its vehicles, in fully driverless mode, had only a handful of minor accidents over millions of miles, and zero fatalities. In one report, over 7 million miles of driverless operation resulted in just 3 minor injuries and significantly fewer police-reportable crashes than an average human-driven fleet. Waymo’s strategy emphasizes redundancy – they have backup systems for braking and steering, and their AI is intensively tested in simulation (Waymo famously runs billions of simulated miles to expose the AI to rare events like a pedestrian jumping in front of the car). 
The timeline of Waymo’s progress – from driving around a closed course to offering rides to paying customers – demonstrates the importance of extensive testing and gradual scaling. As of 2025, Waymo is operating robo-taxis in several cities (Cruise, by contrast, suspended its driverless service in late 2023 after a serious pedestrian incident and subsequent regulatory action), and several companies are piloting self-driving trucks and delivery robots. In the next few years, we expect these case studies to evolve: Tesla aims to refine its consumer-level autonomous features (possibly reaching Level 3, where the car handles most situations but the driver must take over when alerted), whereas Waymo and others will likely broaden their geofenced operations and improve AI efficiency. The competition and diversity in approaches are spurring rapid advancements in the field. One way to visualize an autonomous driving system is through a flowchart of its decision pipeline, as illustrated below:

Sensors (Cameras, LiDAR, Radar, etc.) --> Perception Module (object detection, localization) --> Planning Module (path planning, behavior decision) --> Control Module (steering, throttle, brake commands) --> Vehicle Actuators

Pseudocode Example – Simplified Autonomous Driving Loop:

# Continuously loop to drive autonomously
goal = get_destination()                               # Target destination from the navigation/routing system
while True:
    sensor_data = get_all_sensors()                    # LiDAR point cloud, camera frames, radar signals
    world_model = perception_module(sensor_data)       # AI interprets sensors: detects objects, lanes, etc.
    trajectory = planning_module(world_model, goal)    # Plan path or maneuver based on current environment and destination
    control_actions = control_module(trajectory)       # Compute steering angle, throttle/brake to follow the path
    send_controls_to_vehicle(control_actions)          # Execute the control commands on the vehicle

This loop runs many times per second. The perception module might use a neural network to turn camera images into an identified list of objects (cars, pedestrians) with velocities, using LiDAR to get precise distances. The planning module could be an algorithm that decides when to change lanes or turn, possibly integrating rules (traffic laws) and learned policies for safety. The control module uses feedback control (like a PID controller or learned controller) to smoothly apply steering and braking so that the car follows the planned path. In a real system, additional considerations like fail-safes (e.g., an emergency stop if the perception is uncertain) and V2X communications (to react to traffic signals or other vehicles’ intentions) are also present. This pseudocode is a simplification, but it captures the essence of how AI components interact to enable a car to drive on its own.
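The feedback-control idea mentioned above can be made concrete with a minimal PID controller tracking a target speed. The gains and the one-line vehicle model are illustrative assumptions, not values tuned for any real vehicle:

```python
class PID:
    """Minimal PID controller; gains here are illustrative, not tuned."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        """Return a control output from the current tracking error."""
        self.integral += error * dt                      # accumulate past error
        derivative = (error - self.prev_error) / dt      # rate of change of error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Track a target speed of 20 m/s from a standstill, simulating 50 seconds.
pid = PID(kp=0.5, ki=0.1, kd=0.05)
speed, dt = 0.0, 0.1
for _ in range(500):
    throttle = pid.step(20.0 - speed, dt)
    speed += throttle * dt    # crude stand-in vehicle model: acceleration proportional to throttle
print(round(speed, 1))        # converges near the 20 m/s target
```

The proportional term reacts to the current error, the integral term removes steady-state offset, and the derivative term damps overshoot – the same structure, with far more careful tuning and safety limits, smooths a real car's throttle and steering commands.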

Conclusion

Across healthcare, finance, and autonomous vehicles, AI has transitioned from a novel experiment to a core component driving innovation. In healthcare, AI augments doctors – improving diagnostic accuracy and enabling personalized patient care in ways that were impractical just years ago. In finance, AI operates at the speed of light, from executing trades on Wall Street to securing online transactions against fraud in real time. In the realm of autonomous vehicles, AI literally drives the future of transportation, inching us closer to a world where commute times can be productive or relaxing as cars drive themselves. These advancements did not happen overnight; they are the result of decades of cumulative research and development, with each breakthrough building on past successes and failures. It is also evident that with great power comes great responsibility – deploying AI in critical industries raises issues of safety, ethics, and oversight. Ensuring that AI decisions are transparent and fair (be it a medical AI explaining a diagnosis or a financial AI avoiding bias in loan approvals) is as important as the technical performance. The current trends show an accelerating adoption of AI: hospitals are integrating AI diagnostic tools, banks are launching AI-driven services, and tech companies are testing self-driving cars in more cities. Looking forward, one can expect even more cross-industry convergence – for example, techniques from healthcare AI might improve financial risk models (and vice versa) and the data-handling methods from finance might enhance how autonomous cars share information. For beginners and experts alike, understanding these trends is crucial, as AI in industry is not just a technological shift but a societal one. 
We are witnessing a transformation in how decisions are made: increasingly, humans and AI systems are making those decisions together, whether it’s a doctor and an AI diagnosing a patient or a driver relying on an AI co-pilot on the highway. As AI continues to learn and evolve, its applications will undoubtedly broaden, but the cases outlined in healthcare, finance, and autonomous vehicles stand at the forefront of today’s AI revolution, demonstrating tangible benefits and offering lessons for successful integration of AI into the fabric of industry.

References

  1. J. Valentine, “How AI is Transforming Healthcare Diagnostics: The Power of IBM Watson and Google DeepMind,” Medium, Sep. 9, 2024. [Online]. Available: https://medium.com/@johnvalentinemedia/how-ai-is-transforming-healthcare-diagnostics-the-power-of-ibm-watson-and-google-deepmind-67b86dc43008.

  2. J. Vincent, “DeepMind’s AI can detect over 50 eye diseases as accurately as a doctor,” The Verge, Aug. 13, 2018. [Online]. Available: https://www.theverge.com/2018/8/13/17670156/deepmind-ai-eye-disease-doctor-moorfields.

  3. E. Strickland, “How IBM Watson Overpromised and Underdelivered on AI Health Care,” IEEE Spectrum, Apr. 2, 2019. [Online]. Available: https://spectrum.ieee.org/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care.

  4. S. Seth, “The World of High-Frequency Algorithmic Trading,” Investopedia, Updated Sep. 18, 2024. [Online]. Available: https://www.investopedia.com/articles/investing/091615/world-high-frequency-algorithmic-trading.asp.

  5. IBM, “AI Fraud Detection in Banking – Use Cases and Challenges,” IBM Think Blog. [Online]. Available: https://www.ibm.com/think/. (accessed Jun. 15, 2025).

  6. D. Galeon, “An AI Completed 360,000 Hours of Finance Work in Just Seconds,” Futurism, Mar. 8, 2017. [Online]. Available: https://futurism.com/an-ai-completed-360000-hours-of-finance-work-in-just-seconds.

  7. Q. Z. Ahmed, “System Architecture for Autonomous Vehicles,” Encyclopedia (MDPI), adapted from Sensors, vol. 21, no. 3, article 706, 2021. [Online]. Available: https://encyclopedia.pub/entry/8473.

  8. Waymo Team, “Waymo significantly outperforms comparable human benchmarks over 7+ million miles of rider-only driving,” Waymo Official Blog, Dec. 20, 2023. [Online]. Available: https://blog.waymo.com/2023/12/waymo-significantly-outperforms-human-benchmarks.html.

  9. A. J. Hawkins, “Uber driver in first-ever deadly self-driving crash pleads guilty,” The Verge, Jul. 31, 2023. [Online]. Available: https://www.theverge.com/2023/7/31/23814474/uber-self-driving-fatal-crash-safety-driver-plead-guilty.

  10. “Tesla Autopilot,” Wikipedia, the free encyclopedia, https://en.wikipedia.org/wiki/Tesla_Autopilot (accessed Jan. 2025).

  11. M. Euler, “U.S. to probe Tesla’s ‘Full Self-Driving’ system after pedestrian killed,” Associated Press (via NPR), Oct. 19, 2024. [Online]. Available: https://www.npr.org/2024/10/19/g-s1-29030/us-probe-tesla-full-self-driving-system.
