<!DOCTYPE html>
<html style="background-color:rgb(250, 243, 237);" >
<head>
<title>
GOOGLE
</title>
<style>
.dropdown {
position: relative;
display: inline-block;
}
</style>
</head>
<body>
<div class="dropdown" style="position: sticky;top: 0cm;font-size: 1cm;background-color:rgb(58, 58, 58);color: ghostwhite;"><b>Google’s medical AI was super accurate in a lab. Real life was
a different story.</b></div>
<hr>
<center><img src="google1.jpg" height="320" width="600"></center>
<p style="font-size: .5cm;">If AI is really going to make a difference to patients we need to know how it works when real humans get their hands on it, in real situations.</p>
<hr>
<p style="font-family: 'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;">
The covid-19 pandemic is stretching hospital resources to the breaking point in many countries around the world. It is no
surprise that many people hope AI could speed up patient screening and ease the strain on clinical staff. But a study
from Google Health—the first to look at the impact of a deep-learning tool in real clinical settings—reveals that even
the most accurate AIs can actually make things worse if not tailored to the clinical environments in which they will work.
</p>
<p style="font-family: 'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;">
Existing rules for deploying AI in clinical settings, such as the standards for FDA clearance in the US or a CE mark
in Europe, focus primarily on accuracy. There are no explicit requirements that an AI must improve the outcome for
patients, largely because such trials have not yet been run. But that needs to change, says Emma Beede, a UX researcher
at Google Health: “We have to understand how AI tools are going to work for people in context—especially in health
care—before they’re widely deployed.”
</p>
<p style="font-family: 'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;">
Google’s first opportunity to test the tool in a real setting came from Thailand. The country’s ministry of health has
set an annual goal to screen 60% of people with diabetes for diabetic retinopathy, which can cause blindness if not caught
early. But with around 4.5 million patients to only 200 retinal specialists—roughly double the ratio in the US—clinics are
struggling to meet the target. Google has CE mark clearance, which covers Thailand, but it is still waiting for FDA approval.
So to see if AI could help, Beede and her colleagues outfitted 11 clinics across the country with a deep-learning system
trained to spot signs of eye disease in patients with diabetes.
</p>
<p style="font-family: 'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;">
In the system Thailand had been using, nurses take photos of patients’ eyes during check-ups and send them off to be looked
at by a specialist elsewhere—a process that can take up to 10 weeks. The AI developed by Google Health can identify signs of
diabetic retinopathy from an eye scan with more than 90% accuracy—which the team calls “human specialist level”—and, in
principle, give a result in less than 10 minutes. The system analyzes images for telltale indicators of the condition, such
as blocked or leaking blood vessels.
</p>
<p style="font-family: 'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;">
Sounds impressive. But an accuracy assessment from a lab goes only so far. It says nothing of how the AI will perform in the
chaos of a real-world environment, and this is what the Google Health team wanted to find out. Over several months they
observed nurses conducting eye scans and interviewed them about their experiences using the new system. The feedback wasn’t
entirely positive.
</p>
</body>
</html>