
The Illusion of Truth in the Age of AI
SLIDE 2 — Introduction
What is the "Illusion of Truth"?
- Our brain tends to believe something simply because it looks real.
- Repetition makes information feel true, even if it's false.
Why this matters today:
- Artificial Intelligence creates ultra-realistic fake videos, photos, and voices.
- It is harder than ever to know what is real.

SLIDE 3 — What Are Deepfakes?
Definition:
- Deepfakes = fake images, videos, or voices created using Artificial Intelligence (deep learning).
They can replace:
- Someone's face
- Someone's voice
- Entire events or scenes
Examples:
- Fake Obama speech (Jordan Peele, 2018)
- Fake Tom Cruise TikTok videos

SLIDE 4 — Why Deepfakes Fool the Brain
- Visual dominance: we trust what we see more than what we read.
- Familiarity effect: if the face looks familiar, we believe it.
- Repetition makes lies feel true: seeing a video many times makes it "true" in our mind.
- Authority bias: we trust information more when it is shared by a celebrity or verified account.

SLIDE 5 — Real Incident #1: Fake Pentagon Explosion (2023)
- AI-generated image showing an "explosion" near the Pentagon.
- Shared by verified accounts, including Elon Musk.
- The stock market briefly dropped.
- Later confirmed FAKE by CNN, Reuters, and the U.S. Department of Defense.
Why this matters: even trusted accounts can spread fake AI content.

SLIDE 6 — Real Incident #2: Pope Francis in a Balenciaga Coat (2023)
- AI image of the Pope wearing a white designer jacket.
- Millions believed it; it spread globally before fact-checkers corrected it.
Shows: deepfakes can go viral even when they seem harmless.

SLIDE 7 — Real Incident #3: Fake Zelenskyy Surrender Speech (2022)
- Deepfake video showed President Zelenskyy telling soldiers to surrender.
- Posted during the war in Ukraine.
- Quickly debunked by BBC, Forbes, and several governments.
Shows: deepfakes can influence war, politics, and public opinion.

SLIDE 8 — Real Incident #4: AI Voice Scams
- Criminals clone voices (e.g., Joe Rogan, parents, CEOs).
- Used in scams: "Mom, I'm kidnapped — send money!"
- Victims believe it because the voice sounds real.
Shows: deepfakes are now used in cybercrime.

SLIDE 9 — Deepfakes in Politics
- Fake videos of politicians saying things they never said.
- Used to influence elections and public opinion.
- Hard to debunk once they spread.
Risk: democracy can be manipulated.

SLIDE 10 — Why Social Media Makes It Worse
- Algorithms push shocking content.
- Fake videos spread faster than real news.
- Many people don't check sources.
- Verified accounts sometimes repost before verifying.

SLIDE 11 — The Illusion of Truth Effect
Key idea: "If we see something many times, we believe it."
Deepfakes exploit this psychological weakness by:
- Looking extremely real
- Being shared repeatedly
- Triggering strong emotions

SLIDE 12 — The Dangers of Deepfakes
- Fake scandals can destroy reputations.
- Misinformation can affect elections.
- Financial markets can react to fake events.
- AI voice scams are increasing.
- 90% of deepfakes online are non-consensual pornography (MIT Technology Review).

SLIDE 13 — How to Detect Deepfakes
Face & eyes:
- Strange blinking
- Unnatural movements
- Mismatched lip-sync
Background:
- Blurry or "melting" details
- Impossible reflections or shadows
Source verification:
- Is it from a trusted news outlet?
- Has it been confirmed by multiple sources?
Tools:
- Reverse image search
- Fact-checking websites

SLIDE 14 — Solutions & Prevention
- Raise awareness (education, digital literacy)
- Governments work on deepfake laws
- Social media platforms detect and flag AI content
- AI watermarks (e.g., Google, Midjourney)
- Personal responsibility: verify before sharing

SLIDE 15 — Conclusion
- Deepfakes create the perfect illusion of truth.
- We can no longer believe everything we see online.
To stay safe, we must:
- Think critically
- Verify sources
- Question viral content
- Understand how AI manipulates our perception
Final message: seeing is no longer believing.

SLIDE 16 — References
- BBC News
- CNN
- Reuters
- The Guardian
- MIT Technology Review
- Forbes
- Washington Post
Why is ceftriaxone used systematically in African hospitals?
Injectable ceftriaxone is widely used in African hospitals for both curative and preventive purposes because of its effectiveness, broad spectrum of activity, and favorable pharmacokinetics, despite growing concerns about antibiotic resistance.

Why is ceftriaxone used so much?

Curative use
Ceftriaxone is a third-generation cephalosporin antibiotic, particularly effective against many severe bacterial infections.
- Broad spectrum of activity: it acts against a wide variety of gram-positive and gram-negative bacteria, making it useful for treating serious infections, including when the pathogen has not yet been identified.
- Versatility: it is effective against many infections, including meningitis, septicemia (bloodstream infection), severe pneumonia, intra-abdominal infections, osteoarticular infections, and genital infections such as gonorrhea.
- Once-daily administration: its long half-life allows a single daily injection, which simplifies treatment and reduces the workload for caregivers, especially in hospitals with limited resources.
- Safety profile: it has a good safety profile and is relatively affordable, making it a common choice in many low- and middle-income countries.

Preventive use
- Perioperative prophylaxis: it is often used to prevent postoperative infections, especially in patients undergoing high-risk surgical procedures. This is crucial in settings where the risk of hospital-acquired (nosocomial) infections is high.
- Prophylaxis in immunocompromised patients: in some cases, it is given to patients at increased risk of bacterial infection because of a weakened immune system.

Disadvantages and issues raised
Although ceftriaxone is very effective, its widespread and sometimes inappropriate use is causing serious problems in African hospitals:
- Antibiotic resistance: overprescription and misuse of ceftriaxone are major drivers of bacterial resistance. Several studies in sub-Saharan Africa have reported high rates of inappropriate ceftriaxone use, which is fuelling this phenomenon.
- Impact on bacterial flora: overuse of antibiotics, including ceftriaxone, can promote the proliferation of resistant bacteria and compromise the future effectiveness of treatment.
- Lack of surveillance: organized data on appropriate antibiotic use in sub-Saharan Africa are scarce, and the lack of antimicrobial-resistance surveillance makes it difficult to develop relevant treatment guidelines.
- Pressure on resources: although relatively affordable, its excessive use can strain already limited health budgets.

Conclusion
Ceftriaxone is an indispensable therapeutic tool in African hospitals because of its potency, broad spectrum of activity, and ease of use. However, frequent and sometimes inappropriate curative and preventive use promotes antibiotic resistance. To preserve its effectiveness, more rational use is needed, based on antibiotic stewardship programs and continuous updating of treatment protocols.