Recreating Cybercloud Safeguarding Today


Blog with us, and Navigate the Cyber Jungle with Confidence!

We are here for you. Let us know what you think

17.12.23

The risks of AI are real - but what risks does AI actually pose?

AI risks can be mitigated by an international code of ethics - but there will always be exceptions

A colleague's discussion about the latest risks emerging from AI


Yes, I answered a friend from work: I am familiar with the Netflix docuseries "Unknown" and its episode on AI, titled "Unknown: Killer Robots." It was released in July 2023 as part of a four-week event exploring various mysteries around the world.


The episode delves into the development of military AI-powered robots and the ethical concerns surrounding them. It follows soldiers and scientists involved in creating these "killer robots" while also featuring activists raising awareness about their potential dangers.


The ethical issues surrounding AI in warfare, biology, health, and governance have been familiar to me since 2011 (Killing Robots, 2011); you can read my post from that time. I'm happy to share my knowledge and thoughts, or even engage in a conversation with you.


That is how we started a long conversation; let me share some parts of it with you.


cyberdog AI ethics- secprof


Ethics


The ethical concerns surrounding AI and its potential use in warfare are complex and multifaceted. Here are some points we decided anyone must consider, in this specific order:

1. Autonomous Weapon Systems (AWS) - The use of AI-powered robots capable of making their own decisions about targeting and engagement is a major ethical concern. The episode "Unknown: Killer Robots" highlights this issue, showing the potential for unintended consequences and loss of human control. International debate and treaties are currently underway to regulate or even ban AWS, but progress is slow.

2. Bias and Discrimination - AI algorithms can perpetuate existing biases and lead to discriminatory targeting or profiling. This is particularly concerning in the context of war, where decisions about life and death are being made. Ensuring fairness and accountability in AI development and deployment is crucial.

3. Accountability and Responsibility - When AI systems make mistakes, who is responsible? This question becomes even more complex when dealing with autonomous weapons. Holding developers, commanders, or even the AI itself accountable remains a significant challenge.

4. Dehumanization and Escalation - The use of AI robots in warfare could further dehumanize conflict and lead to increased violence and escalation. The psychological and ethical implications of relying on machines to kill require careful consideration.

5. Politicization and Manipulation - The potential for AI to be used for political ends, such as targeting specific individuals or manipulating public opinion, is another serious concern. Safeguards and oversight mechanisms are needed to ensure responsible use of AI technology.

Regarding the question that came up in the conversation: will we soon see armies purchasing AI robots not for rescue or intelligence gathering, but for killing with autonomous intelligence - for use in war, or to kill politicians in a coup?

  • Timeline for first use - It is difficult to predict with certainty when AI robots will be used in warfare. However, it is likely that their use will be gradual and incremental, starting with tasks like surveillance and logistics before moving to more complex and controversial roles like targeting and engagement. The episode "Unknown: Killer Robots" depicts a potential future scenario where AI robots are already in use, but this is not necessarily a realistic timeline.

  • Use by/against politicians - While the use of AI robots against specific individuals is technically possible, it is unlikely to be the first or most common application. The military would likely prioritize using AI for tasks deemed strategically beneficial, such as targeting enemy forces or infrastructure. However, the potential for misuse and abuse cannot be entirely discounted.

  • AI is a powerful tool - It can be used or misused for evil acts. As we develop and deploy AI technologies, we must carefully consider the ethical implications and ensure they are used responsibly and humanely. Open and informed public discussion is essential to shaping the future of AI and preventing its misuse.

By focusing on the ethical considerations, monitoring the potential risks, and debating them, we can build awareness so that AI is used to benefit humanity, not to harm it. Let's learn and debate a bit more.

Liability and Accountability

"You raise some excellent and concerning points," my friend said, "about the potential misuse of AI, particularly in the context of warfare." The issues of liability, accountability, and responsibility are indeed crucial and complex - and not a distant threat, but a near one…

So how can we approach liability and accountability, and who will be responsible?

  • States - In traditional warfare, governments and militaries are held accountable for their actions under international law and human rights conventions. However, with AI-powered weapons, the lines become blurred. Who is responsible if an autonomous drone makes a targeting mistake? The programmer, the commander who deployed it, or the AI itself? Without clear legal frameworks, attributing blame and seeking justice could be incredibly challenging.

  • Non-state actors - The rise of rogue actors and non-state groups capable of developing or acquiring AI weapons further complicates the issue. How do we hold them accountable if they operate outside the traditional legal system? This raises the specter of an unregulated arms race, with devastating consequences.

  • Corporations and organizations - Private companies developing AI technologies for military use also raise concerns. Should they be held liable for the misuse of their products? Can they be incentivized to develop and deploy AI responsibly? Striking a balance between innovation and ethical considerations is crucial.

Responsibility

Ultimately, the responsibility for preventing the misuse of AI lies with all of us. Governments must establish robust regulations and oversight mechanisms. Researchers and developers must prioritize ethical principles in their work. And individuals must remain informed and engaged in the debate about AI's role in society.

The urgency of the situation:

While predicting the exact timeline of AI misuse is impossible, several factors suggest it could happen sooner rather than later:

  • Rapid advancements in AI technology - The pace of development in AI is astonishing. Capabilities that seemed futuristic just a few years ago are now within reach.

  • Accessibility of AI tools - The tools and knowledge needed to develop basic AI are becoming increasingly accessible, even for non-state actors.

  • Lack of international consensus - Despite ongoing discussions, there is no international consensus on regulating or banning autonomous weapons. This creates a dangerous vacuum that could be exploited.

Therefore, it is crucial to act with urgency and implement robust safeguards to prevent the misuse of AI before it's too late. We must not wait for a tragedy to occur before taking action.


cyborg AI ethics- secprof

Safeguarding AI misuse


The current landscape surrounding AI ethics is complex and somewhat daunting. While passionate groups are working against AI misuse, they face significant challenges from the rapid advancements and powerful interests driving AI development - and they are often dismissively labeled as the losing side!


The EU's AI Act is a major step in the right direction, but it's not a silver bullet.


Okay, let's try to break down this situation:


Positive signs


  • Growing public awareness - More people are becoming aware of the potential risks of AI and demanding responsible development. This is crucial for putting pressure on governments and corporations to act.


  • International efforts - The EU's AI Act is not alone. Other countries and organizations are also developing regulations and ethical frameworks for AI. This shows a growing international consensus on the need for action.


  • Technological advancements in safety - Researchers are developing AI safety tools and techniques, such as explainable AI and adversarial training, to mitigate risks and prevent misuse (see the sketch below).
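To make the adversarial-training idea concrete, here is a minimal PyTorch sketch; the toy model, optimizer, loss function, and epsilon value are illustrative assumptions, not a production defense:

```python
import torch
import torch.nn as nn

def fgsm_examples(model, loss_fn, x, y, epsilon=0.03):
    # Fast Gradient Sign Method: nudge each input in the direction
    # that most increases the loss
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, loss_fn, x, y):
    # Train on both the clean and the adversarial versions of the batch
    x_adv = fgsm_examples(model, loss_fn, x, y)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Tiny usage example with a toy classifier
model = nn.Sequential(nn.Linear(4, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
print(adversarial_training_step(model, optimizer, nn.CrossEntropyLoss(), x, y))
```

The idea is that a model regularly trained on its own worst-case perturbations becomes harder to fool with small, crafted input changes.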


Challenges remain


  1. Lack of global consensus - Different countries have different priorities and approaches to AI regulation. This lack of unity creates loopholes and makes it harder to enforce standards.

  2. Powerful vested interests - Companies and governments with significant investments in AI may resist regulations that hinder their profits or technological ambitions.

  3. Rapid technological advancements - AI is developing quickly, making it difficult for regulations to keep pace and address the latest threats.

  4. The complexity of AI - AI systems are often complex and opaque, making it challenging to identify and address potential biases or vulnerabilities.


So, is the EU’s AI Act enough? 

No, but it's a significant step forward. 

What we can also do is: 


Support AI ethics organizations - Donate your time or resources to groups like the Future of Life Institute and the Campaign to Stop Killer Robots.


Hold corporations accountable - Demand transparency and ethical practices from companies developing AI technologies in every field, and press our governments to regulate them.


Educate ourselves and others - Analyze AI ethics, create a roadmap of the potential risks, and share your knowledge with others.


Advocate for responsible AI policies - Contact your elected politicians and officials and urge them to support legislation that promotes ethical AI development.


We need to remember that preventing an AI apocalypse won't be a one-time effort. It will require sustained pressure from individuals, organizations, and governments. By working together, we can ensure that AI is used for the good of humanity, not for scams, power grabs, and an era of apocalypse.


The future of AI is not predetermined yet. We have the power to shape it and make sure it benefits all of humanity.


Cyborg_health AI bot - secprof


What can we do with AI for the benefit of healthcare?

Developing a code of ethics for AI in healthcare is crucial to ensure its responsible and beneficial use. Here are some suggestions for your field:

General Principles

  • Patient autonomy and informed consent - Patients must be informed about the use of AI in their diagnosis and treatment, and have the right to refuse or opt out.

  • Beneficence and non-maleficence - AI tools should be used to improve patient outcomes and to avoid unnecessary harm.

  • Transparency and explainability - AI decision-making processes should be transparent and understandable to healthcare professionals and patients alike (see the explainability sketch after this list).

  • Fairness and non-discrimination - AI algorithms must be designed and trained to avoid bias and discrimination based on race, gender, socioeconomic status, or other factors.

  • Privacy and security - Patient data used in AI development and deployment must be protected with robust safeguards.

  • Accountability and responsibility - Developers, healthcare providers, and institutions must be accountable for the use and outcomes of AI in healthcare.
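As a deliberately simplified illustration of explainability, here is a sketch using scikit-learn's permutation importance on a public dataset; the model and dataset are illustrative assumptions - in a real clinical setting, the model and features would be the hospital's own:

```python
# Which inputs drive the model's predictions? A larger drop in score
# when a feature is shuffled means the model relies on it more.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
# Report the five most influential features to clinicians
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```

A report like this does not make a model fully "explainable," but it gives clinicians a concrete starting point for questioning what the model actually relies on.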

Specific to Medical AI Tools

  • Clear role definition - Define the intended role of the AI tool (diagnostic aid, decision support, etc.) and ensure it does not replace human judgment and expertise.

  • Validation and testing - AI tools must be rigorously tested and validated in clinical settings to ensure their accuracy, safety, and efficacy.

  • Human oversight and control - Human healthcare professionals should always have the final say in any decision made with the help of AI.

  • Continuous monitoring and improvement - AI models should be continuously monitored for bias, errors, and potential harm, and updated as needed (see the monitoring sketch after this list).

  • Education and training - Healthcare professionals need to be educated on the use of AI tools, their limitations, and how to interpret their outputs.
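Here is a minimal sketch of what continuous bias monitoring could look like, comparing true-positive rates across a hypothetical demographic attribute; the column names, example batch, and 0.2 tolerance are illustrative assumptions, not a clinical standard:

```python
# Monitor a deployed model for group bias by comparing true-positive
# rates (correctly flagged positives / all actual positives) per group.
import pandas as pd

def tpr_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    positives = df[df["actual"] == 1]
    return positives.groupby(group_col).apply(lambda g: (g["predicted"] == 1).mean())

# Example monitoring batch: model predictions paired with ground truth
batch = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "actual":    [1,   1,   0,   1,   1,   0],
    "predicted": [1,   0,   0,   1,   1,   1],
})

rates = tpr_by_group(batch, "group")
print(rates)
# Alert if the gap between groups exceeds a tolerance set by the ethics board
if rates.max() - rates.min() > 0.2:
    print("Bias alert: true-positive rates diverge across groups")
```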

Adoption and Implementation

  • Involve stakeholders - Include doctors, nurses, patients, ethicists, and AI developers in the process of developing the code of ethics.

  • Clear communication and education - Ensure all stakeholders understand the code of ethics and its implications for their work.

  • Incentivize compliance - Implement mechanisms to promote and reward the ethical use of AI in healthcare.

  • Regular review and updates - Regularly review and update the code of ethics to reflect evolving technologies and practices.

Remember, a code of ethics is just a framework

Its success depends on implementation, enforcement, and adaptation to evolving technologies and contexts. By working together and continuously improving, we can ensure that AI in healthcare benefits all patients and contributes to a more ethical and equitable healthcare system.


The end of a colleague's discussion

Here we finished our discussion, and I started to write this article/post. From here, it is in the hands of the reader. I have given you the knowledge; if you share these thoughts, help us make a change and share this post with your friends, today!


9.12.23

New DeepMind AI capabilities - Google Gemini prof Dec 2023

By NJP

Gemini deepmind AI Secprof


It has been only a year since OpenAI's ChatGPT was presented to the public, and the world of AI continues to develop. Pay attention to the video highlighting the level of intelligence of the AI that Google presents: DeepMind's Gemini.




A few words about the latest Google Gemini video, which many bloggers say was full of exaggerations and inaccuracies. Since it is not currently possible to test all of Gemini's capabilities, it is difficult at this stage to verify the claims precisely. I leave it here for your judgment.


The Artificial Intelligence Act (AI Act) of the European Union 2023

On December 8, 2023, the European Parliament and the European Council reached a political agreement on the AI Act. Note the emphasis: this is binding law, not mere guidance!

The law, which still awaits formal adoption and will enter into force after a transition period, will require all developers and users of artificial intelligence in Europe to meet strict safety and human-rights requirements.

The law establishes three types of artificial intelligence systems:

  1. High-risk AI systems - systems with the ability to cause significant harm to humans, such as autonomous military or civilian systems that make decisions about humans. These will require approval by a regulatory authority before they are released to the market.
  2. Medium-risk AI systems - systems that can cause significant harm, but not necessarily to humans. These will be required to meet strict safety requirements, such as reporting possible defects, protecting privacy and information security, and ensuring equal opportunity.
  3. Low-risk AI systems - systems with no substantial risk of causing harm. These will be required to meet basic safety levels, such as privacy protection and information security.

The law also refers to additional categories of AI systems, such as:

  • Education - systems used to assess students or make decisions about admission to school or university.
  • Employment - systems used to make decisions about hiring, promoting or firing employees.
  • Law enforcement - systems used to identify suspects or to make decisions about arrest or filing charges (early versions of such systems already exist today in alpha).

This is groundbreaking legislation in the field of artificial intelligence regulation, and it is expected to affect Europe from the beginning of the coming year. Israel has decided to wait and see what other countries do on AI regulation before establishing its own; however, it has begun by establishing a regulatory authority for AI and has issued a memorandum of intent between the Ministry of Economy and the Ministry of Justice.

AI ACT Europe Secprof

The law enacted by the European Parliament and the European Council obliges all EU member states and institutions to comply with its requirements.

Other countries may adopt similar regulations based on the EU's AI law - for example, the United States, China, and Japan, and, in my view, within the coming months or year, Israel as well. These are among the countries that have already begun developing their own AI regulations.

Based on similar cases in the past, such as privacy legislation, it is likely that many countries around the world will adopt regulations similar to the EU AI Act. These regulations may promote the safe and appropriate use of artificial intelligence.

The State of Israel has not yet officially announced whether it intends to adopt the European Union's artificial intelligence law for legislation within the country. However, Israel will likely adopt similar regulations, based on the progress of AI technology in the world.

Several factors may influence Israel's decision on this issue. One factor is Israel's desire to maintain international standards in the field of artificial intelligence. Another factor is Israel's desire to protect the human rights and privacy of its citizens.

In the end, the decision whether to adopt the EU's artificial intelligence law will be a political decision of the Israeli government.

The PDF document of the EU AI Act can be found on the European Commission website. The PDF contains the full text of the law, including all definitions, requirements, and exceptions. 

24.11.23

Preventing a malicious code from running in your networks

Best-practice rules for preventing unauthorized malicious code from running in your networks

By NJP

This post discusses the importance of using secure code-signing certificates, using self-replicating security architectures, and becoming accountable for safe code deployment in your network. Finally, we recommend that organizations also maintain visibility into their networks (see the extension at the end).


Here are four solutions suggested in this article to prevent unauthorized code from running in your network:

Use secure code-signing certificates - Code-signing certificates are used to verify the identity of the publisher of a piece of code. This helps to ensure that the code is from a trusted source and has not been tampered with (see the sketch below).
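To illustrate the idea, here is a minimal Python sketch that verifies a detached RSA signature over an artifact using the cryptography package; the file paths and raw-key handling are illustrative assumptions, and real code-signing schemes (e.g., Authenticode) additionally validate full certificate chains:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def is_artifact_trusted(artifact_path: str, signature_path: str, publisher_key_pem: str) -> bool:
    # Load the publisher's RSA public key from a PEM file
    with open(publisher_key_pem, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(artifact_path, "rb") as f:
        data = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        # Raises InvalidSignature if the artifact was tampered with
        # or signed by a different key
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

A deployment pipeline would call such a check before allowing the artifact to run, refusing anything whose signature does not verify.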

Use a self-replicating security architecture - Self-replicating security architectures are designed to detect and prevent unauthorized code from running even if the network is compromised. This is done by replicating security controls across the network, so that there is always a backup in place if one part of the network is compromised.

Nominate a risk owner for safe code deployment - It is important to have a clear understanding of who is responsible for deploying code to production. This helps to ensure that code-inspection measures hold in your organization, that only authorized code is deployed, and that there is a process in place for reviewing and approving code changes.

Network visibility (monitoring and control) - Network visibility gives organizations better awareness of the behavior of traffic on their networks, which they can use to improve the efficiency, security, and performance of those networks and to prevent unauthorized code from running. Useful controls include (see the ACL sketch after this list):

  • Using identity-management (IdM) network access control lists (ACLs) to control who can access the network. An ACL is a list of rules that specify which users and devices are allowed to access certain resources on the network.
  • Using a Firewall/WAF to block unauthorized traffic. A firewall is a network security device that monitors and controls incoming and outgoing network traffic.
  • Using intrusion detection and prevention systems (IDS/IPS). An IDS/IPS is a network security device that monitors network traffic for suspicious activity.
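Here is a minimal sketch of how an ACL is evaluated; the rule fields and the first-match-wins policy are illustrative assumptions, not the syntax of any particular firewall or IdM product:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class AclRule:
    network: str   # source network, e.g. "10.0.0.0/24"
    port: int      # destination port
    allow: bool    # True = permit, False = deny

ACL = [
    AclRule("10.0.0.0/24", 443, allow=True),    # internal clients may reach HTTPS
    AclRule("0.0.0.0/0",   3389, allow=False),  # block RDP from anywhere
]

def is_allowed(src_ip: str, dst_port: int) -> bool:
    for rule in ACL:  # first matching rule wins
        if ip_address(src_ip) in ip_network(rule.network) and dst_port == rule.port:
            return rule.allow
    return False  # default deny: anything unmatched is blocked

print(is_allowed("10.0.0.15", 443))     # True
print(is_allowed("203.0.113.9", 3389))  # False
```

The important design choice is the final default-deny line: traffic that matches no explicit rule should never be allowed by accident.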


And, last but not least: educate employees about the risks of unauthorized code. Employees should be aware of the risks of running unauthorized code and should be trained to identify and report suspicious activity.




8.11.23

Open-source intelligence course (OSINT) on social networks

SECPROF OSINT COURSE

For the Social Media Intelligence Gathering course, we've built a collection of the most useful social media OSINT tools. Come develop your personal capabilities in open-source intelligence gathering (OSINT) with this course, and learn which tools will help you acquire knowledge and dive into a powerful world of collecting valuable information from social media platforms such as Facebook, Instagram, Telegram, LinkedIn, Twitter, and more.

Come find out how you can intensify your efforts and accelerate your knowledge in the fields of cyber and offensive information protection.


This OSINT course will help you learn and improve the digital intelligence-gathering capabilities of the organization where you work now - or the next one, where you'll earn better.


To be published soon...


21.9.23

Kevin Mitnick, a Legendary Hacker Pioneer - The Evolution of a Black Knight of the Hacking Order

Kevin Mitnick: from hacking pioneer, through the most famous hacker in the world, to the age of AI hacking power - and how it all connects.

I'm writing this post in memory of a passionate and extraordinary figure named Kevin Mitnick - a true original, one of a kind.

Kevin, a Jewish American, was a brilliant hacker, a gifted writer, and a passionate advocate for security awareness; he became a sought-after consultant for Fortune 500 companies and governments across the world. His death is a major loss to the cybersecurity community, but his legacy will live on with us.


Kevin Mitnick's famous business card

From Hacking Pioneering to AI Hacking - The Evolution of a Legendary Hacker

In the ever-evolving landscape of cybersecurity, few names resonate as strongly as Kevin Mitnick's. From his early days as a hacking pioneer to his status as one of the world's most notorious hackers, Mitnick's journey has been nothing short of extraordinary. As technology advances, so does the art of hacking, and Mitnick's story serves as a fascinating bridge between the past and the age of AI hacking. In this post, we explore the life and exploits of Kevin Mitnick and delve into how his legacy has shaped the world of cybersecurity as we know it today.


Part 1: The Early Days of Hacking Pioneering

Kevin Mitnick's fascination with computers began at a young age, sparking an insatiable curiosity about the inner workings of these machines. In the 1980s and '90s, as the internet was still in its infancy, Mitnick emerged as a prodigious hacker, earning a reputation for his mastery of social engineering techniques. He navigated the digital realm with unparalleled skill, infiltrating networks and systems, all while evading law enforcement's grasp. His cunning and audacious exploits earned him the nickname "The Condor."


Part 2: The Rise to Infamy - Becoming the Most Famous Hacker in the World

With each successful hack, Mitnick's notoriety grew. His targets ranged from corporate giants to government agencies, making headlines worldwide. His ability to breach supposedly impenetrable systems exposed the vulnerabilities of early digital infrastructure, sending shockwaves through the tech industry. Mitnick's exploits came to a head when he was captured and eventually sentenced to prison, sparking a global debate on the ethics of hacking and the importance of robust cybersecurity.


Part 3: The Age of AI Hacking - Connecting the Dots

As technology continued to advance, the world of hacking evolved with it. The age of artificial intelligence brought new challenges and opportunities for hackers, and Mitnick recognized the potential of AI as both a tool for cyber defense and a weapon for malicious actors. After serving his sentence, Mitnick shifted his focus from the dark side of hacking to becoming a cybersecurity consultant, utilizing his knowledge and experience to help organizations protect themselves from cyber threats.


Part 4: The Legacy of Kevin Mitnick in the Age of AI Hacking

Kevin Mitnick's legacy lives on as a cautionary tale and an inspiration for the cybersecurity community. His exploits showcased the importance of constant vigilance in the face of ever-evolving hacking techniques. As AI-powered tools become more sophisticated, the need for robust cybersecurity measures has never been greater. Mitnick's transformation from a notorious hacker to a cybersecurity expert demonstrates that even those once on the wrong side of the law can use their skills for the greater good.


Finally

Kevin Mitnick's journey from hacking pioneering to becoming one of the most famous hackers in the world is a compelling story of redemption, innovation, and adaptation. His life's arc reflects the evolving landscape of cybersecurity, with AI hacking emerging as the latest frontier. As we move forward, the lessons from Mitnick's exploits and his transition to cybersecurity consulting can guide us in staying one step ahead of malicious actors in this ever-changing digital world. With a combination of knowledge, ethics, and innovation, we can build a safer digital ecosystem for the future.


Ransomware attacks on Azure Storage: How to protect your data

Ransomware attacks on Azure Storage are a growing phenomenon. These attacks can cause significant losses of data and time and can lead to activity interruptions, loss of reputation, and damage to customer trust.

Ransomware attacks on Azure Storage typically work by hackers breaking into a user's systems and encrypting their data. The hackers then demand that the user pay a ransom to get the encryption key and recover the data.

There are several ways that ransomware attacks can occur on Azure Storage, including:

  • Phishing attacks - Hackers send fake emails that contain malicious links or files. When a user opens the malicious links or files, the system may be infected with malware.
  • Brute-force attacks - Hackers try to guess users' login passwords to Azure Storage (see the lockout sketch after this list).
  • Identity-management attacks - Hackers exploit weaknesses in the Azure identity-management system to gain access to users' Azure Storage systems.
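As a simple illustration of blunting brute-force guessing, here is a minimal failed-login lockout sketch; the thresholds and in-memory storage are illustrative assumptions, not Azure's actual mechanism (Azure AD's built-in smart lockout handles this in practice):

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 900  # 15-minute lockout window

failed = defaultdict(list)  # username -> timestamps of recent failures

def login_allowed(username: str) -> bool:
    now = time.time()
    # Keep only failures inside the lockout window
    failed[username] = [t for t in failed[username] if now - t < LOCKOUT_SECONDS]
    return len(failed[username]) < MAX_ATTEMPTS

def record_failure(username: str) -> None:
    failed[username].append(time.time())
```

After five failures within fifteen minutes, `login_allowed` returns False and further guesses are simply refused, which makes exhaustive password guessing impractically slow.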


By taking several steps, users can protect their Azure Storage from ransomware attacks:

  • Use Azure Security Center - Azure Security Center provides advanced security functions that help detect and block ransomware attacks.
  • Use Azure Backup - Azure Backup allows users to create periodic backups of their data, supporting disaster recovery (DR) and business continuity (BCP).
  • Use Azure Active Directory Identity Protection - Azure Active Directory Identity Protection provides protection against unauthorized login attempts.
  • Use Azure Key Vault - Azure Key Vault allows users to securely store and manage encryption keys (see the sketch below).
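For example, here is a minimal sketch of reading a secret (such as an encryption key) from Azure Key Vault using the azure-identity and azure-keyvault-secrets Python packages; the vault URL and secret name are illustrative assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up a managed identity, CLI login, etc.
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)

# Retrieve the key at runtime instead of hard-coding it in application config
storage_key = client.get_secret("storage-encryption-key")
print(storage_key.name)  # never log the secret's value itself
```

Keeping keys out of code and configuration files means that even if ransomware actors steal a repository or a config dump, the encryption keys themselves remain behind Key Vault's access controls.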


In summary

Ransomware attacks on Azure Storage are a real threat. By taking the steps listed above, users can protect their data and keep it safe.


Below are case studies of major ransomware attacks, for further learning:

  1. In 2021, the DarkSide hacker group attacked the American energy company Colonial Pipeline. The hackers penetrated the company's systems and demanded a ransom in exchange for recovery of the data. The company paid roughly 4.4 million dollars, and operations were restored.
  2. In 2021, hackers using the Phoenix Locker ransomware attacked the American insurance company CNA Financial. The hackers penetrated the company's systems and demanded a large ransom; the company reportedly paid about 40 million dollars to recover its data.
  3. In 2020, the Ryuk hacker group attacked the American healthcare company Universal Health Services. The attack disrupted the company's systems, and the company estimated its resulting losses at about 67 million dollars.

These examples demonstrate the significant damage that ransomware attacks can cause. They can lead to activity interruptions, loss of reputation, and damage to customer trust.

secprof Ransomware attack

Here are some links to more information about ransomware attacks on Azure Storage:

  • Microsoft: Azure Security Center: https://docs.microsoft.com/en-us/azure/security-center/
  • Microsoft: Azure Backup: https://docs.microsoft.com/en-us/azure/backup/
  • Microsoft: Azure Active Directory Identity Protection: https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/
  • Microsoft: Azure Key Vault: https://docs.microsoft.com/en-us/azure/key-vault/