Recreating Cybercloud Safeguarding Today

Cyber Security Blog
Blog with us, and Navigate the Cyber Secrets with Confidence!

We are here for you, let us know what you think

21.8.25

Cyber Threats to the Israeli Healthcare System - 2025 🚨: What Every Manager Must Know

Are you ready for the critical cyber challenges that await the Israeli healthcare system in 2025?

The Disturbing Reality: A 24% Increase in Cyber Incidents 📈

Fresh data from the Ministry of Public Security shows a 24% increase in reported cyber incidents in 2024, with the healthcare system at the forefront of the targets. This is not just another statistical report - this is a reality that directly affects the continuity of patient care.


7 Critical Cyber Threats Threatening the Israeli Healthcare System:

🎯 1. State-Sponsored Cyber Warfare

  • APT Groups Targeting the Israeli Healthcare System Specifically
  • Paralyzing Hospital Networks in Times of Crisis
  • Damaging Emergency Coordination Systems


💰 2. Advanced Digital Ransomware

  • Moving from Encryption to Data Theft + Extortion
  • 238 Ransomware Threats in Global Healthcare Systems in 2024
  • Prolonged Disruption to Patient Services


🔓 3. Theft of Data and Access Credentials

  • A Sharp Rise in Password-Stealing Malware
  • Breaches of Health Insurance Systems
  • Compromise of Sensitive Patient Information


📱 4. Attacks on Medical Devices

  • Exploiting Connected Medical Devices as an Entry Point
  • Vulnerabilities in Israeli PLC Systems (Unitronics)
  • Infiltration Through Devices with Weak Security


☁️ 5. Cloud Threats

  • Misconfigurations in Cloud EHR Systems
  • Weak Identity Controls
  • Exposure of Sensitive Medical Information
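Misconfigurations like these can often be caught with a simple automated audit before an attacker finds them. Below is a minimal illustrative sketch in Python; the setting names, structure, and thresholds are assumptions for the example, not any real cloud provider's API:

```python
# Minimal sketch of a storage-configuration audit for a cloud EHR bucket.
# Setting names and thresholds are illustrative assumptions only.

def audit_storage_config(config: dict) -> list[str]:
    """Return a list of findings for risky settings."""
    findings = []
    if config.get("public_access", False):
        findings.append("Bucket allows public access")
    if not config.get("encryption_at_rest", False):
        findings.append("Encryption at rest is disabled")
    if not config.get("mfa_required", False):
        findings.append("MFA is not enforced for admin access")
    if config.get("retention_days", 0) < 30:
        findings.append("Audit-log retention is shorter than 30 days")
    return findings

# Example: a deliberately misconfigured EHR storage bucket
ehr_bucket = {
    "public_access": True,
    "encryption_at_rest": False,
    "mfa_required": True,
    "retention_days": 90,
}

for finding in audit_storage_config(ehr_bucket):
    print("FINDING:", finding)
```

In practice such checks run continuously against the real provider's configuration API, but even this toy version shows how identity and exposure controls can be verified mechanically.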


🤖 6. Artificial Intelligence Threats

  • Using AI for Sophisticated Phishing Attacks
  • Poisoning Medical Models
  • Forging Doctor and Patient Identities


🔗 7. Supply Chain Threats

  • Overreliance on External Suppliers
  • One Breach = Multiple Systems Shut Down
  • Lack of Oversight of Suppliers


The New Obligations: Amendment 13 to the Privacy Protection Law 📋

Starting August 14, 2025 - every healthcare organization must appoint a Data Protection Officer (DPO). Are you ready?


🎯 Why is this report different?

✅ Intelligence-based analysis from relevant public websites

✅ Focus on Israel - threats specific to the geopolitical situation

✅ Practical recommendations for reducing risks

✅ Alignment with Ministry of Health requirements

✅ Lessons from the CrowdStrike incident that disabled dozens of hospitals in Israel

 

💡 What will you find in the full report?

  • Detailed Threat Map for 2025
  • Defense Strategies Tailored to the Israeli Healthcare System
  • Intelligence Insights from the Field
  • Implementation Guide for Ministry of Health Requirements
  • Documented Source List for More Information

📥 Download the Full Risk Report Now

"Healthcare Cyber Threats Matter 2025 - Israel Focus"

By Nir Jonathan Passi - Cyber Due Diligence


📊 The full report includes:

  • In-depth analysis of each threat
  • Implementation recommendations specific to Israel
  • Practical tools for risk assessment
  • A guide to meeting regulatory requirements


⚡ In a world where one cyber threat can paralyze an entire hospital - knowledge is your best defense.


Don't wait for the next breach. Prepare now...


How to build a cyber security controls methodology

How do you build a 📌 security controls methodology 🔐 that works for any organization?

After years of working with complex cyber risk management methodologies, I decided to think outside the box 💡 and build something simpler - but no less effective.


📈 When I need to adapt a methodology to an organization, I usually start from a risk management framework (such as CIAAN). Instead of approaching it only through traditional risk management, I create a threat map and build an appropriate control structure on top of it. From this I developed a methodology based on 12 pillars, originally built for the information protection architects of a large healthcare organization; any organization can adapt it to its own unique needs. 👍🏻


Why 12? 🎓

The number 12 carries a meaning of completeness and order in many cultures 🖖🏻 - 12 tribes, 12 messengers, 12 months 📅, 12 hours 🕓, 12 zodiac signs 🏹. It represents a foundation of stability and integrity, exactly what we are looking for in information security. 📓


The 12 key pillars for reducing cyber risks:


๐Ÿ“ Authentication - Identifying and validating user identities

๐Ÿ“ Authorization - Defining permissions and approaches

๐Ÿ“ Encryption - Protecting information at rest and in motion

๐Ÿ“ Network Security - Protecting the communication infrastructure

๐Ÿ“ Endpoint Security - Secure devices and connections

๐Ÿ“ API Security - Protecting software interfaces

๐Ÿ“ SSDLC and container security - Security at the development level

๐Ÿ“ Vulnerability Management - Identifying and addressing weaknesses

๐Ÿ“ Supply Chain and Third-Party Controls - Protecting against suppliers

๐Ÿ“ Auditing and Compliance - Compliance with standards and regulations

๐Ÿ“ Incident Response - Preparedness to handle security incidents

๐Ÿ“ Disaster Recovery and BCP - Business Continuity


The advantage of this methodology:

✅ Simplicity - Easy to implement and understand

✅ Flexibility - Adaptable to any organization

✅ Comprehensive Coverage - Covers all aspects of security

✅ Practicality - Focuses on applicable controls

This methodology helps organizations build a customized security strategy without getting into the unnecessary tangle of complex frameworks.


๐Ÿค๐Ÿป What do you think of this approach? ๐Ÿคท๐Ÿป How do you build the security methodology in your organization?


For a detailed and effective reading, go to the document: https://lnkd.in/dE-Bbkiv 

13.7.25

DPO Ready: Full Preparation for the DPO Role Following Amendment 13 to the Privacy Protection Law

🛡️ The Importance of the Data Protection Officer (DPO) - Amendment 13

⏰ The obligation to appoint a privacy protection officer from August 14, 2025 - Amendment 13

📋 What is Amendment 13 to the Privacy Protection Law?

Amendment 13 to the Privacy Protection Law requires organizations to appoint a Data Protection Officer (DPO) under Section 17B. The officer serves as the central point of contact between the organization and the Privacy Protection Authority, and is responsible for implementing privacy protection policy and strengthening internal oversight mechanisms.

💼 The Officer's Areas of Responsibility

👨‍💼 Ongoing advice to management
Continuous training and counsel for management teams on privacy protection

🔍 Risk mapping
Identifying and analyzing potential privacy risks in the organization's systems

📊 Privacy impact assessments
Conducting DPIAs (Data Protection Impact Assessments) for new projects

🛡️ Developing information security procedures
Creating and implementing advanced procedures for protecting personal data

⚙️ Privacy by Design
Embedding Privacy by Design and Privacy by Default principles

🤝 Liaison with the Authority
Mediating between the organization and the Privacy Protection Authority
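The risk-mapping responsibility can be maintained as a simple structured register that feeds DPIA decisions. A minimal sketch in Python follows; the field names, the 1-5 likelihood/impact scales, and the escalation threshold are illustrative assumptions, not requirements of the law:

```python
# Minimal sketch of a privacy risk register a DPO might maintain.
# Field names and the 1-5 likelihood/impact scales are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PrivacyRisk:
    system: str
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    PrivacyRisk("Patient portal", "Excessive data retention", 4, 3),
    PrivacyRisk("HR system", "Unencrypted exports", 2, 5),
    PrivacyRisk("Marketing CRM", "Missing consent records", 3, 2),
]

# Risks above an assumed threshold are escalated for a full DPIA
dpia_needed = [r for r in register if r.score >= 10]
for risk in dpia_needed:
    print(f"DPIA required: {risk.system} (score {risk.score})")
```

Even a register this simple gives the officer a defensible, repeatable basis for deciding which new projects need a full impact assessment.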

🎯 Who is this document for?

Senior management, risk managers, IT/OT teams, privacy stakeholders in the organization, and candidates for the DPO role.

🎯 Benefits of implementing the document

💰 Saving time and money
📈 Improved reputation
⚖️ Reduced non-compliance risk
😊 Customer satisfaction

📢 Distributing the document - essential for success!

Distributing the document across all departments and organizational processes will enable uniform implementation of the appointment obligation and the officer's new tasks. Comprehensive training will ensure the adoption of regulation-compliant operational procedures and strengthen stakeholder trust.

🚀 Prepare for a more protected future - start now!

17.12.23

The risks of AI are real - which risks does AI pose?

AI risks can be mitigated by an international code of ethics - but there will always be exceptions

A colleague's discussion about the latest risks emerging from AI


Yes, I answered a friend from work. I am familiar with the Netflix docuseries "Unknown" and its episode on AI, titled "Unknown: Killer Robots." It was released in July 2023 as part of a four-week event exploring various mysteries around the world.


The episode delves into the development of military AI-powered robots and the ethical concerns surrounding them. It follows soldiers and scientists involved in creating these "killer robots" while also featuring activists raising awareness about their potential dangers.


The ethical issues surrounding AI in warfare, biology, health, and governance have been familiar to me since 2011 (Killing Robots 2011). You can read my post from that time. I am happy to share my knowledge and thoughts, or even engage in a conversation with you.


That is how a long conversation started; let me share some parts of it with you.


[Image: cyberdog AI ethics - secprof]


Ethics


The ethical concerns surrounding AI and its potential use in warfare are complex and multifaceted. Here are some points we decided anyone must consider, in this specific order:

1. Autonomous Weapon Systems (AWS) - The use of AI-powered robots capable of making their own decisions about targeting and engagement is a major ethical concern. The episode "Unknown: Killer Robots" highlights this issue, showing the potential for unintended consequences and loss of human control. International debate and treaties are currently underway to regulate or even ban AWS, but progress is slow.

2. Bias and Discrimination - AI algorithms can perpetuate existing biases and lead to discriminatory targeting or profiling. This is particularly concerning in the context of war, where decisions about life and death are being made. Ensuring fairness and accountability in AI development and deployment is crucial.

3. Accountability and Responsibility - When AI systems make mistakes, who is responsible? This question becomes even more complex when dealing with autonomous weapons. Holding developers, commanders, or even the AI itself accountable remains a significant challenge.

4. Dehumanization and Escalation - The use of AI robots in warfare could further dehumanize conflict and lead to increased violence and escalation. The psychological and ethical implications of relying on machines to kill require careful consideration.

5. Politicization and Manipulation - The potential for AI to be used for political ends, such as targeting specific individuals or manipulating public opinion, is another serious concern. Safeguards and oversight mechanisms are needed to ensure responsible use of AI technology.

Regarding the question that came up in the conversation: will we soon see armies purchasing AI robots not for rescue or intelligence gathering, but for autonomous killing in war, or to assassinate politicians in a coup?

  • Timeline for first use - It is difficult to predict with certainty when AI robots will be used in warfare. However, it is likely that their use will be gradual and incremental, starting with tasks like surveillance and logistics before moving to more complex and controversial roles like targeting and engagement. The episode "Unknown: Killer Robots" depicts a potential future scenario where AI robots are already in use, but this is not necessarily a realistic timeline.

  • Use by/against politicians - While the use of AI robots against specific individuals is technically possible, it is unlikely to be the first or most common application. The military would likely prioritize using AI for tasks deemed strategically beneficial, such as targeting enemy forces or infrastructure. However, the potential for misuse and abuse cannot be entirely discounted.

  • AI is a powerful tool - It can be used for good or misused for evil acts. As we develop and deploy AI technologies, we must carefully consider the ethical implications and ensure that they are used responsibly and humanely. Open and informed public discussion is essential to shaping the future of AI and preventing its misuse.

By focusing on the ethical considerations, monitoring the potential risks, and debating them, we can build awareness so that AI is used to benefit humanity, not to harm it. Let's learn and debate a bit more.

Liability and accountability

"You raise some excellent and concerning points," my friend said, "about the potential misuse of AI, particularly in the context of warfare." The issues of liability, accountability, and responsibility are indeed crucial and complex, and they are not a distant threat but a near one.

So how can we approach liability and accountability, and who will be responsible?

  • States - In traditional warfare, governments and militaries are held accountable for their actions under international law and human rights conventions. However, with AI-powered weapons, the lines become blurred. Who is responsible if an autonomous drone makes a targeting mistake? The programmer, the commander who deployed it, or the AI itself? Without clear legal frameworks, attributing blame and seeking justice could be incredibly challenging.

  • Non-state actors - The rise of rogue actors and non-state groups capable of developing or acquiring AI weapons further complicates the issue. How do we hold them accountable if they operate outside the traditional legal system? This raises the specter of an unregulated arms race, with devastating consequences.

  • Corporations and organizations - Private companies developing AI technologies for military use also raise concerns. Should they be held liable for the misuse of their products? Can they be incentivized to develop and deploy AI responsibly? Striking a balance between innovation and ethical considerations is crucial.

Responsibility

Ultimately, the responsibility for preventing the misuse of AI lies with all of us. Governments must establish robust regulations and oversight mechanisms. Researchers and developers must prioritize ethical principles in their work. And individuals must remain informed and engaged in the debate about AI's role in society.

The urgency of the situation:

While predicting the exact timeline of AI misuse is impossible, several factors suggest it could be sooner rather than later:

  • Rapid advancements in AI technology - The pace of development in AI is astonishing. Capabilities that seemed futuristic just a few years ago are now within reach.

  • Accessibility of AI tools - The tools and knowledge needed to develop basic AI are becoming increasingly accessible, even for non-state actors.

  • The lack of international consensus - Despite ongoing discussions, there is no international consensus on regulating or banning autonomous weapons. This creates a dangerous vacuum that could be exploited.

Therefore, it is crucial to act with urgency and implement robust safeguards to prevent the misuse of AI before it's too late. We must not wait for a tragedy to occur before taking action.


[Image: cyborg AI ethics - secprof]

Safeguarding against AI misuse


The current landscape surrounding AI ethics is complex and somewhat daunting. Passionate groups are working against AI misuse, but they face significant challenges from the rapid advancements and powerful interests driving AI development, and on top of that they are often dismissively labeled as losers!


The EU's AI Act is a major step in the right direction, but it's not a silver bullet.


Okay, let's try to break down this situation:


Positive signs


  • Growing public awareness - More people are becoming aware of the potential risks of AI and are demanding responsible development. This is crucial for putting pressure on governments and corporations to act.

  • International efforts - The EU's AI Act is not alone. Other countries and organizations are also developing regulations and ethical frameworks for AI. This shows a growing international consensus on the need for action.

  • Technological advancements in safety - Researchers are developing AI safety tools and techniques, such as explainable AI and adversarial training, to mitigate risks and prevent misuse.


Challenges remain


  1. Lack of global consensus - Different countries have different priorities and approaches to AI regulation. This lack of unity creates loopholes and makes it harder to enforce standards.

  2. Powerful vested interests - Companies and governments with significant investments in AI may resist regulations that hinder their profits or technological ambitions.

  3. Rapid technological advancements - AI is developing quickly, making it difficult for regulations to keep pace and address the latest threats.

  4. The complexity of AI - AI systems are often complex and opaque, making it challenging to identify and address potential biases or vulnerabilities.


So, is the EU’s AI Act enough? 

No, but it's a significant step forward. 

What else we can do:


Support AI ethics organizations - Donate your time or resources to groups like the Future of Life Institute and the International Campaign to Stop Killer Robots.


Hold corporations accountable - Demand transparency and ethical practices from companies developing AI technologies, and press our governments to regulate them in every field.


Educate ourselves and others - Study AI ethics, map out the potential risks, and share your knowledge with others.


Advocate for responsible AI policies - Contact your elected politicians and officials and urge them to support legislation that promotes ethical AI development.


We need to remember that preventing an AI apocalypse won't be a one-time effort. It will require sustained pressure from individuals, organizations, and governments. By working together, we can ensure that AI is used for the good of humanity, not for scams, power grabs, and an era of apocalypse.


The future of AI is not yet predetermined. We have the power to shape it and make sure it benefits all of humanity.


[Image: Cyborg_health AI bot - secprof]


What can we do with AI for the benefit of health?

Developing a code of ethics for AI in healthcare is crucial to ensure its responsible and beneficial use. Here are some suggestions for your field:

General Principles

  • Patient autonomy and informed consent - Patients must be informed about the use of AI in their diagnosis and treatment, and have the right to refuse or opt out.

  • Beneficence and non-maleficence - AI tools should be used to improve patient outcomes and avoid unnecessary harm.

  • Transparency and explainability - AI decision-making processes should be transparent and understandable to healthcare professionals and patients alike.

  • Fairness and non-discrimination - AI algorithms must be designed and trained to avoid bias and discrimination based on race, gender, socioeconomic status, or other factors.

  • Privacy and security - Patient data used in AI development and deployment must be protected with robust safeguards.

  • Accountability and responsibility - Developers, healthcare providers, and institutions must be accountable for the use and outcomes of AI in healthcare.

Specific to Medical AI Tools

  • Clear role definition - Define the intended role of the AI tool (diagnostic aid, decision support, etc.) and ensure it does not replace human judgment and expertise.

  • Validation and testing - AI tools must be rigorously tested and validated in clinical settings to ensure their accuracy, safety, and efficacy.

  • Human oversight and control - Human healthcare professionals should always have the final say in any decision made with the help of AI.

  • Continuous monitoring and improvement - AI models should be continuously monitored for bias, errors, and potential harm, and updated as needed.

  • Education and training - Healthcare professionals need to be educated on the use of AI tools, their limitations, and how to interpret their outputs.
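Continuous monitoring for bias can start very simply. Here is a minimal sketch in Python of a demographic-parity check between two patient groups; the metric choice and the 0.8 threshold (echoing the common "four-fifths rule") are illustrative assumptions, not a clinical standard:

```python
# Sketch: a minimal bias check on a medical AI tool's outputs.
# The demographic-parity metric and the 0.8 threshold are illustrative
# assumptions (the threshold echoes the common "four-fifths rule").

def positive_rate(predictions: list[int]) -> float:
    """Fraction of positive predictions (e.g. referrals) in a group."""
    return sum(predictions) / len(predictions)

def parity_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of positive-prediction rates between two patient groups (<= 1)."""
    rate_a, rate_b = positive_rate(group_a), positive_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example: referral predictions (1 = referred) for two patient groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 62.5% referred
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% referred

ratio = parity_ratio(group_a, group_b)
if ratio < 0.8:
    print(f"Possible bias: parity ratio {ratio:.2f} is below 0.8")
```

A real deployment would monitor several fairness metrics over time and alongside clinical outcomes, but even a single ratio like this turns "monitor for bias" from a principle into a concrete, repeatable check.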

Adoption and Implementation

  • Involve stakeholders - Include doctors, nurses, patients, ethicists, and AI developers in the process of developing the code of ethics.

  • Clear communication and education - Ensure all stakeholders understand the code of ethics and its implications for their work.

  • Incentivize compliance - Implement mechanisms to promote and reward the ethical use of AI in healthcare.

  • Regular review and updates - Regularly review and update the code of ethics to reflect evolving technologies and practices.

Remember, a code of ethics is just a framework

Its success depends on implementation, enforcement, and adaptation to evolving technologies and contexts. By working together and continuously improving, we can ensure that AI in healthcare benefits all patients and contributes to a more ethical and equitable healthcare system.


The end of a colleague's discussion

Here we finished our discussion, and I started to write this article. From here it is in the hands of every reader. I have given you the knowledge; if these thoughts resonate with you, help us make a change and share this post with your friends today!