Authored by

Aayush Ghosh Choudhary
Co-founder & CEO at Scrut

Reinforce AI Trust with ResponsibleAI

From the launch of IBM’s AI business tool Watson in 2011 to the debut of ChatGPT in 2022, AI has come a long way in a short time.

The disruptive technology makes life a whole lot easier—it helps developers write code, aids in optimizing supply chain management, and even assists medical professionals in analyzing diagnostic data, among many other functions.

But it has also complicated things. It’s no surprise that AI can wreak havoc when used for the wrong reasons (hackers using AI, for instance), but it can also cause harm inadvertently.

Our dependency on AI has reached such a critical level that its errors can even cost lives. In one such instance, Japanese police allowed AI to influence their decision not to place a child in protective custody, and the child later died as a result of the system’s miscalculation.

AI has also been known to perpetuate harmful biases such as gender and racial discrimination due to prejudices in its training data. From AI algorithms objectifying women’s bodies to Google’s image recognition misidentifying black people as gorillas, AI is miles away from being reliable and ethical.

These dangers, however, are regarded as minor chinks in AI’s armor—it’s widely accepted that risks come with the territory. But what is the level of acceptable risk? How can we determine if AI is harnessing data responsibly? Are the mechanisms secured with the correct guardrails? The answers are still unclear!

What is clear is that responsible AI usage is the need of the hour, and we explore the reasons why in this blog.

The Need for AI with Accountability

The allure of AI lies in its ability to make complex decisions, analyze vast amounts of data, and optimize processes with unparalleled efficiency. However, the flip side of this advancement is a lack of understanding and control over how AI systems arrive at their outcomes.

Most organizations today, across every industry, are looking for ways to integrate AI into their applications. The trend is especially visible in supply chains, yet there is no denying that every potential use case comes with potentially unforeseen risks.

Navigating the current landscape becomes more challenging when you have to:

  • Identify if your third-party vendors employ AI and clarify its purpose
  • Ascertain if AI was used in constructing their core applications
  • Ensure vendors prioritize security and controls during rapid development
  • Confirm the availability of apt talent and processes for robust AI control maintenance

AI models have become black boxes, often making choices that even their creators cannot fully explain. This opacity has led to instances of discrimination, erroneous arrests, and even fatal consequences. It’s clear that without adequate transparency, AI can inadvertently perpetuate societal biases and contribute to undesirable outcomes.

The lack of clarity in comprehending data and models makes it difficult for security leaders to predict and mitigate potential issues. 

Let’s take a deeper look at some of the key concerns for security leaders when it comes to navigating the AI threat landscape.

Challenges within the AI Landscape

Diving deeper into the complexities of AI, we encounter a realm of challenges rooted in data intricacies and model biases. These challenges are particularly pronounced when we examine AI’s data sources:

1. Unintended Privacy Infringement

The very nature of AI, in its quest to learn and predict, can inadvertently lead to the incorporation of private data like personally identifiable information (PII) or protected health information (PHI). This unintentional input poses grave privacy concerns and ethical dilemmas, highlighting the need for robust data filtering mechanisms.
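To make the idea of a data-filtering mechanism concrete, here is a minimal sketch of a redaction pass over training text. The patterns and placeholder labels are illustrative only; a production filter would cover many more PII types (names, addresses, health record numbers) and would not rely on regex alone.

```python
import re

# Hypothetical patterns for two common PII types; illustrative only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each matched PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(redact_pii(sample))
```

Running such a pass over data before it ever reaches a training pipeline is one simple way to reduce the risk of private information being memorized by a model.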

2. Exposure of Sensitive Business Intelligence and Intellectual Property

AI systems, as they sift through diverse data streams, might unintentionally expose sensitive business insights or intellectual property. The automated nature of AI, while efficient, can inadvertently divulge strategic information, potentially jeopardizing competitive advantages.

3. Legal Complexities of Copyrighted Information

AI’s ability to ingest and generate content raises pertinent questions about copyright infringement. The use of copyrighted material from public sources can inadvertently lead to legal entanglements if not carefully managed, necessitating a thorough understanding of intellectual property laws in the AI context.

4. Dependence on Unreliable and Biased Data Sources

AI’s effectiveness hinges on the quality of its training data. Relying on unreliable or biased sources can lead to skewed outcomes. For instance, if AI is trained on historical data rife with gender biases, it might perpetuate these biases when making decisions, such as in hiring processes. Recognizing and rectifying these biases requires meticulous curation and continuous monitoring of training data.
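To show how such a bias can be surfaced in practice, here is a small sketch that computes per-group selection rates from historical hiring outcomes and the widely used “four-fifths” disparate-impact ratio. The groups and outcomes below are invented for illustration.

```python
from collections import Counter

# Hypothetical historical hiring outcomes: (group, hired) pairs of the
# kind a model might be trained on.
records = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rates(records):
    """Fraction of candidates hired, per group."""
    totals, hires = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

rates = selection_rates(records)
# Disparate-impact ratio: lowest selection rate over highest.
# Values below 0.8 fail the common "four-fifths" screening rule.
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))
```

A check like this on the training data itself, run before and after each retraining, is one concrete form the “meticulous curation and continuous monitoring” mentioned above can take.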

Unidentified risks could lead to reputational damage, perpetuation of biases, or security vulnerabilities that threaten an organization’s integrity. And more importantly, the ever-evolving nature of AI systems means that new risks can emerge unexpectedly. 

So, what’s the right approach to addressing these AI challenges? 

Introducing ResponsibleAI: Your Path to Secure and Ethical AI

In the face of these challenges, Scrut proudly presents “ResponsibleAI,” a groundbreaking framework designed to empower companies with the tools and knowledge needed to navigate the complex world of AI responsibly. 

The ResponsibleAI framework has been carefully crafted to meet the unique needs and requirements of modern AI-driven organizations.

Benefits of using ResponsibleAI

Scrut’s custom framework integrates top-tier industry guidelines, including NIST AI RMF and EU AI Act 2023, setting a new standard for excellence. 

Here are some of the benefits of using ResponsibleAI.

1. Responsible Data and Systems Usage

ResponsibleAI ensures that your AI-powered systems adhere to legal and ethical boundaries. It guarantees that the data used for training AI models is collected within the boundaries of the law and privacy regulations. This crucial step ensures that your organization isn’t inadvertently violating data protection rules.

2. Ethical and Legal Compliance

With ResponsibleAI, you can rest assured that your AI-powered systems will not be used to break any laws or regulations. The framework restricts the usage of AI to prevent privacy invasions, harm to individuals, or any other unethical activities.

3. Risk Identification and Mitigation

One of the standout features of ResponsibleAI is its ability to identify and assess risks associated with your AI implementations. The framework provides clear visibility into potential risks based on your organization’s context, allowing you to prioritize and mitigate them effectively.

4. Out-of-the-Box Controls

Navigating the uncharted waters of AI risks can be overwhelming for security leaders. ResponsibleAI simplifies this process by offering pre-defined controls that encompass various aspects of AI governance, training, privacy, secure development, technology protection, and more. This means you can implement the right controls from day one.

5. Building Customer Trust

In an age where skepticism about AI’s ethical use abounds, adhering to industry best practices in AI risk management becomes a beacon of trust for your consumers. ResponsibleAI paves the way for building a trustworthy brand image, ensuring that your customers feel safe interacting with your AI-powered products.

6. Avoidance of Fines and Penalties

While there might not be specific AI-related penalty definitions, improper AI use can still result in severe penalties due to detrimental outcomes. ResponsibleAI shields you from such legal pitfalls, helping you steer clear of financial and reputational damage.

7. Cost Savings Through Early Risk Identification

As the saying goes, prevention is better than cure. ResponsibleAI’s early risk identification capabilities can save your organization significant costs by nipping potential issues in the bud. This proactive approach prevents the need for costly and time-consuming system changes down the line.

Reimagine AI Responsibly with Scrut 

In the fast-paced world of AI innovation, the ResponsibleAI framework emerges as a beacon of hope and guidance for organizations seeking to harness AI’s potential while mitigating its inherent risks. This framework is not just a tool; it’s a philosophy that transforms AI from a potential liability into a trusted ally.

To CEOs, CISOs, and GRC leaders from small and medium-sized businesses, ResponsibleAI offers a lifeline in the complex landscape of AI governance. It’s your blueprint for cultivating ethical AI practices that not only protect your organization but also build bridges of trust with your customers.

As we unveil ResponsibleAI to the world, we invite you to join us in this transformative journey. Embrace ResponsibleAI and pave the way for a future where innovation and ethics go hand in hand. Together, let’s redefine what AI can achieve—for the benefit of your organization, your customers, and society at large.


Risk Grustlers EP 6 | Are you YAFing, Bud?

In the sixth episode of our podcast Risk Grustlers, we explore how to lead security teams effectively with Satya Nayak, Head of Security Engineering & Operations at Outreach, a software development company in Seattle, Washington. 

Satya started out as a developer and grustled his way into security. He shares what sparked his passion for cybersecurity and gives solid advice on how to lead security teams with finesse. His tips on how to keep up with hackers and boost cybersecurity are sure to inspire security leaders to up their game.

He also discusses optimistically how innovation can help make GRC a whole lot easier and more appealing in his conversation with our CEO Aayush Ghosh Choudhury. Get ready to see both GRC and security in a new light!

Watch the complete podcast here

Read on for some interesting highlights from the episode.

Aayush: What led you to fall in love with security?

Satya: In 2019, I started my career as a developer. One day, I met this guy in the Delhi metro, and we started chatting about this hacking book that had caught my interest. Turns out he had a couple of friends who were also intrigued by the threat landscape. So, we began to meet up and discuss cybersecurity. We would research topics and swap insights.

Then I did my Masters in security, and my journey in cybersecurity began. I joined Expedia, where I built their security teams at a very early stage. I then joined Outreach, one of the top fast-moving SaaS startups, and got the opportunity to build their security team as well. The difference between the two experiences was tremendous, and both further strengthened my passion for cybersecurity.

Aayush: Security professionals are known for being mavericks. How do you build a security team in an organization without killing their maverick spirit?

Satya: When it comes to the folks in security, their real drive is the passion for security itself. That’s what brought them here in the first place. Now, the key when forming a security team is to make sure you don’t smother that passion under a pile of processes and organizational rules.

So, what’s crucial is to create an open and safe atmosphere within the team, where innovation can thrive within certain limits. We’re not out to obliterate everything in our path; we’re responsibly exploiting vulnerabilities. 

So, how do we get this going? Step one: set a clear purpose and mission for your security endeavors. Then, introduce solutions while keeping your business secure. Map out connections and dependencies, assign roles, and be crystal clear about who’s accountable for what and where the boundaries lie.

You also want to keep things smooth between teams. No stepping on each other’s toes! That’s where good communication comes in. We’re dealing with a lot of uncharted territory here. So, you want a team that feels safe to tackle challenges head-on. When they stumble, you’ve got their back, and that’s how they’ll have the guts to take on even bigger challenges.

And let’s not forget the power of recognition. When they hit it out of the park, as a leader, you make sure they get their time in the spotlight. When something doesn’t quite pan out, you shield them from the storm. This kind of support creates that psychological safety net.

Aayush: How much should growth-stage companies invest in security? What kind of message should they start strengthening first?

Satya: Starting simple—it’s not smart to spend a thousand dollars on something that’s worth ten dollars. So, that sweet spot is key when you’re building a dedicated team.

Now, think about it. If there’s no business, there’s no security. It’s a business thing—it’s not just about throwing money at a security team. Once you’re able to afford a security team, you should approach security from two angles.

First, the ‘feeling secure’ angle, which is all about making your potential customers feel comfortable doing business with you. This involves all your compliance certifications.

Then, there’s being secure. That involves the nitty-gritty work. You’ve got engineers in action, putting in all those security controls to toughen up your systems.

Remember, these aren’t separate from your compliance efforts. Being secure actually backs up feeling secure. As you amp up security controls, your compliance reports are covered.

So, these are like two sides of a coin. One pulls in customers, and the other ensures you’re a trusty guardian of their data. It’s a neat strategy where both sides win.

Aayush: How do you convince the board to increase the budget for security? 

Satya: You know, they often say security is a thankless gig, right? You’re in the background, only noticed when things aren’t smooth. But that’s when you’re doing your job well, keeping things solid. 

When you approach the board, you have to make things crystal clear. You should show them why security matters and how it ties to investments and the overall health of your programs. You should not wave away the possibility of incidents, but show how you will put up a strong defense.

Also, looking ahead is key. Think three years down the line. You’re not just dealing with today’s threats, but tomorrow’s too. Technology keeps evolving, and those sneaky bad actors are evolving with it. I’ve got an example: AI being used by hackers for lightning-fast identity breaches.

So, your defenders need cutting-edge tools too. You don’t want them bringing knives to a tech-gunfight. Your role as a security leader includes keeping up with these advancements and making the case for upgrades to the higher-ups.

Oh, and data is your friend. You’ve got a story to tell, but back it up with those hard numbers. It’s great to weave a tale, but adding data makes it rock-solid for your organization.

Lastly, you’ve got to know your enemy. What threatens a big e-commerce company might not be the same for another. So, do a proper risk assessment and threat intel, tailored to your turf.

Aayush: Attackers are getting a lot smarter. How do security leaders help their teams keep up?

Satya: You don’t have to be an expert in everything as a security leader, but you’ve got to have a strong grasp of the different security functions and how the threat world is evolving.

If you’re not in sync with the security scene, you might end up passing the decision-making buck onto your security team and stakeholders.

As a leader, you’ve got to stay on the pulse. Attend those conferences, chat with industry folks, and keep tabs on the latest security products in the pipeline. This way, you’re armed with the right info to make well-informed choices.

Operating from the sidelines won’t cut it. You’ve got to be in the know about what’s happening out there. That’s how you back up your team, manage projects, and make those smart moves.

Aayush: There is a bit of a framework soup right now, with new frameworks popping up every now and then. It’s impossible to keep growing the GRC team to keep up with them. How do you think organizations can keep up with these new frameworks?

Satya: Yes, new frameworks keep exploding on the scene. However, the security controls we use are not changing. We’re sticking to the same controls regardless of how many frameworks are out there.

There should be innovation when it comes to how we match these controls to all these different frameworks. Think continuous compliance, where you can check your compliance status anytime without those audit headaches.

What’s important is having a unified way to map these controls. You need tools and tech that can link your controls to various frameworks. That way, when you’re gathering evidence, it’s not about the frameworks, it’s about those controls. If you can show you’ve got the controls locked down, you can reuse that evidence for all those different frameworks.
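The control-first mapping Satya describes can be sketched as a simple lookup. The control names and framework labels below are hypothetical placeholders, not drawn from any actual standard:

```python
# Hypothetical mapping from internal controls to the frameworks each
# control's evidence can satisfy; names are illustrative only.
CONTROL_MAP = {
    "encryption-at-rest": ["SOC 2", "ISO 27001", "NIST AI RMF"],
    "access-review": ["SOC 2", "ISO 27001"],
    "incident-response-plan": ["SOC 2", "NIST AI RMF"],
}

def evidence_reuse(control: str) -> list[str]:
    """Frameworks that one piece of evidence for a control satisfies."""
    return CONTROL_MAP.get(control, [])

def controls_for(framework: str) -> list[str]:
    """Invert the map: controls an auditor would check for a framework."""
    return [c for c, fws in CONTROL_MAP.items() if framework in fws]

print(evidence_reuse("encryption-at-rest"))
print(controls_for("SOC 2"))
```

The point of the sketch is the direction of the lookup: evidence is collected once per control, and the mapping layer, not the security team, fans it out to every framework that references that control.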

Also, it’s not just about saving your security team’s time or streamlining those audits. There’s more to it, especially when it comes to your stakeholders. They benefit a lot too. Imagine this: instead of just using evidence for one purpose, you’re reusing it across the board.

Focus on those controls, and let the technology handle the mapping to different frameworks. That’s the smart way to do GRC in this day and age.

Aayush: Do you think GRC can become sexy again?

Satya: GRC right now is viewed mainly as a business function. You do your audits maybe once or twice a year. But things are changing, and fast.

We’re looking at a future, maybe 2 to 3 years down the road, where GRC will be streamlined. Imagine a one-stop platform where all your certifications, risk management, compliance requirements, and even vendor assessments are linked up.

You won’t be stuck hunting down data in different places. Nope, it’ll all be right there, in real-time, ready to go. You’ll be able to see the network effect in play. Like, how evidence from controls feeds into policies and how risk management gets a boost from this tight connection.

With the way tech is racing ahead, you’ll see more platforms popping up, aiming to knit all this together seamlessly. GRC is getting a major upgrade!


Risk Grustlers EP 5 | De-Romanticizing the Cybersecurity Complexity

The fifth episode of our podcast Risk Grustlers promises to be exciting, thanks to the unmatched energy of Ross Haleliuk, the Head of Product at LimaCharlie, a cybersecurity startup, and the author of the blog Venture in Security. 

Ross provides a captivating narrative of his unique and unexpected journey into the realm of cybersecurity. As someone who initially hesitated to enter the field due to its complexity and jargon, Ross sheds light on the challenges that newcomers, even from tech backgrounds, might face. 

He gives useful advice to founders aiming to connect with Chief Information Security Officers (CISOs) to effectively tailor their solutions to cater to specific customer segments. 

Ross also delves into the evolving synergy between cybersecurity services and products, offering insights into the emerging convergence that seeks to bridge the gap between automation and the irreplaceable expertise of human decision-making. 

All this and much, much more is discussed in his engaging conversation with our CEO Aayush Ghosh Choudhury. You don’t want to miss this one!

Watch the complete podcast here

Keep reading for some interesting highlights from the episode.

Aayush: What led you to the security space?

Ross: My journey into cybersecurity is a bit unique, yet I believe it’s more common than people admit. I’ve been a tech guy for over a decade, working in various fields like e-commerce, retail, wholesale, and financial technology.

Eventually, a friend approached me about joining their cybersecurity startup to lead the product side. The team and opportunity were fantastic. Excited but unfamiliar with cybersecurity, I started researching. Initially, I hesitated and told my friend I wouldn’t join.

This story comes up whenever someone asks how I got into cybersecurity. I’ve always considered myself a generalist, focusing on aspects like market strategy, partnerships, sales, and operations, not coding. The switch to cybersecurity was daunting due to the complexity and jargon. Unlike my previous roles, understanding cybersecurity products proved perplexing due to the myriad abbreviations.

For a product person like me, grasping the roles of different cybersecurity tools was challenging. Plus, getting hands-on experience was tough due to the barriers in accessing the tools. There was also significant overlap between products, blurring the lines between categories.

All these hurdles initially made me decline the offer. But, history shows that I eventually changed my mind. Now, after years, I’m here, deep in the world of cybersecurity.

Aayush: What led you to start blogging about cybersecurity? What was the journey like building Venture in Security?

Ross: When I stepped into cybersecurity, I knew I had to catch up fast. Unlike folks with years of experience, I had to quickly grasp the basics and industry dynamics. To do that, I dove into reading and attending events, absorbing as much as I could.

Cybersecurity, being highly tech-driven, had a ton of technical content. If you’re into security engineering, you could find communities and tools to learn from. However, the business side lacked comprehensive resources. Few people truly understood the full industry picture, not due to their intelligence, but because day-to-day operations rarely leave time for that big picture thinking.

Understanding the ecosystem proved tough. The cybersecurity landscape is intricate, unlike any other. For instance, unlike most industries, the movement between private and public sectors is incredibly fluid. People come from various backgrounds – law enforcement, compliance, software engineering, incident response, law, and more. This diversity makes getting a holistic grasp challenging.

But I needed to catch up quickly, so I connected with people, attended events, and read extensively. I distilled my learning into simplified notes for myself. Initially, I didn’t plan to start a blog, but I shared an article which garnered positive responses. I continued writing and gained subscribers, and that’s how my blog started.

The principle of simplicity applies beyond cybersecurity. Whether it’s starting a company or a project, the approach remains the same. Identify an idea, try it out, iterate if needed. Just like my blog – I started with a simple idea, saw interest, adapted based on feedback, and kept going.

Aayush: As an angel investor and industry insider, how do you recommend striking a balance between leveraging experience and avoiding tunnel vision, while also avoiding the challenges of a lack of expertise when starting a cybersecurity startup?

Ross: In the world of cybersecurity startups, there’s a variety of problems to tackle. In any industry, successful founders often share certain qualities. When you think about the consumer space, anyone can identify problems and attempt solutions. Yet, in B2B, understanding business dynamics, decision-making, and purchasing complexities is crucial. Cybersecurity’s technical nature makes it hard for fresh graduates to start a security company. Likewise, decades of solely cybersecurity experience might limit innovation.

It’s about striking a balance. Founders need a mix of industry experience, innovative thinking, and humility. The best founders are open to seeking advice and mentoring. The founder pipeline is narrow, resulting in many repeat founders, especially in cybersecurity. Familiarity breeds trust, vital in an industry where relationships play a massive role in success.

Aayush: As a founder, what would be the worst possible advice you could give me that would completely sabotage my chances of getting a CISO’s attention?

Ross: When founders aim to connect with CISOs, they should dive deeper into their problem-solving context. Too often, they generalize issues for “enterprise businesses.” To succeed, it’s crucial to identify the personas and customer segments most interested in their solutions. For instance, it might be CISOs who recently started their roles or companies with small IT teams transitioning into security.

The key is understanding the buying journey specific to your solution. Map out this journey, involving stakeholders and decision-makers. Not everyone has equal influence. If your solution requires technical expertise, target companies with relevant roles. Experiment with different approaches, like attending conferences where your target audience is present.

Don’t just aim for CISOs – their time is limited. Identify the right people within the organization to champion your solution. Perhaps it’s someone from HR or finance, depending on your value proposition. The idea is to align your outreach with the customer’s needs and interests, rather than relying on a broad “enterprise” approach.

Aayush: You’d written a blog that discussed how at some point in time there would be some degree of convergence between cybersecurity services companies and cybersecurity products companies. Where do you see this convergence happening and how should companies think about it?

Ross: When it comes to services, providers want to streamline and automate to improve their economics. It’s like if they can make their processes predictable and efficient, it boosts their margins and scalability. If they’re eyeing venture capital funding, having a product component becomes almost necessary due to better potential returns.

Customers, even those using services, want visibility into what’s happening in their environment. They’re looking for a way to track progress and stats, like on a dashboard. This visibility trend is pushing service providers to emphasize products and simplify processes.

For service providers, many have a good customer base, but they often rely on manual work. So instead of hiring more people, they’re seeking to automate tasks and offer more streamlined experiences, like products. And on the flip side, customers love to compare costs, right? If a service isn’t priced per unit, like per employee, it’s tough for customers to compare and make decisions. By offering clear pricing, it’s easier for companies to understand costs as they grow.

Now, on the product side, companies are realizing they’re missing out if they don’t offer services alongside. They’re sitting on a revenue opportunity. Imagine if you’re a big security product provider. You have customers who want to upgrade for more attention and support.

Another thing is that while automation and AI are great, they can’t solve everything. Mature security teams get this. They know that a customized approach is essential. Plus, there’s a growing recognition that some problems are just too complex for pure automation. There’s a need for human judgment.

Lastly, the need for services is crucial in areas like incident response. You can’t automate every aspect of handling incidents effectively. It’s all about finding that balance between what products can do and where human expertise shines.

So, this blending of products and services addresses the need for efficiency, customization, and holistic problem-solving in the cybersecurity landscape.

Aayush: Can you tell us a little bit about your blog? I’m sure our viewers would love to know more about it.

Ross: I like to tackle complex issues and share industry insights in a way that’s easy to understand. Honestly, I’m not a fan of using fancy abbreviations and jargon that only make things harder to grasp. Some people use these terms to sound knowledgeable, but it often just hides the fact that they don’t really get it. English is my third language, so I prefer simplicity.

I apply this approach in my blog too. When I talk to potential founders of early-stage companies, I tell them to explain their ideas in a way that anyone with tech experience can understand. If you can’t do that, how can you sell to CISOs who come from diverse backgrounds, some with no deep technical knowledge?

By the way, check out my blog. It covers various topics in cybersecurity, like investing in startups, data trends, dealing with multiple security vendors, and the evolving industry landscape. I like to analyze the tough problems worth tackling.

Click here to check out Ross Haleliuk’s blog: Venture in Security!


Risk Grustlers EP 4 | Back to Basics: A Crash Course for Experts!

Welcome back to another episode of Risk Grustlers, the podcast aimed at demystifying risk management for newcomers. Our mission is to unravel the complexities of this field and make it accessible to everyone taking their first steps.

In this episode, Gary shares his unique journey into the world of security. Gary’s story is one of transitioning from a 15-year career as a developer to finding his footing in the realm of information security and risk management. Join us as he walks us through the path that led him here. 

Watch the complete podcast here.

Let’s take a look at some important highlights from the enlightening podcast.

Aayush: Tell us about your journey into security. How did you end up in this field? What emotions and experiences shaped your path?

Gary: Sure. I spent 15 years coding, then shifted to architecture and design. 9/11 shook things up and hit the travel industry hard. I got involved in sharing data with Homeland Security to build a secure watch list after that incident. We needed to transfer data very securely. Data safety caught my interest—locking it down and keeping the bad actors out.

I started designing systems with security as the base, tight controls, and limited access. Then, our company’s security head moved to Expedia and wanted me on his new team. I was like, “Why me? I design, not secure.” He needed someone to bridge the gap between tech and business, someone to explain why security matters. So, I made the move in 2011 from a place I’d been for 15 years. 

Aayush: Did you feel nervous about it? How much of a learning curve was there when you made the transition? 

Gary: It was a very difficult transition. I second-guessed it multiple times, as I wasn’t the principal security architect. I was a bundle of nerves, dealing with that classic imposter syndrome. Leaving behind a stable gig in Colorado for the unknown in Seattle was no easy choice, especially for my family. But I took the leap.

Seattle felt like a whole new universe. I met several genius architects. My learning curve shot through the roof. I dove deep into research and learned a lot just by being around them in meetings all day, every day. My role was to bridge the gap by making the tech lingo understandable to everyone else so they knew what was going on and what they needed to do.

Aayush: As you ventured into the unfamiliar, what were your initial steps to acclimate and identify your path? What were the primary challenges you tackled first?

Gary: I focused on simplifying concepts, especially in identity and access management, and ensured everyone understood who could access what and when. Role-based access was key—defining it so only the right folks could access data and systems at the right times. Early on, I grasped this basic idea and translated it into practical solutions that people needed.

It felt great when people turned to me for answers instead of the other architects. Being that bridge helped make complex security talk understandable and actionable. 

One key lesson I learned early was to admit when I didn’t know something. Just saying, “Let me check on that,” became my go-to. The approach saved me from diving into deep waters and helped me come back with solid answers after consulting with colleagues.

Aayush: Given the overwhelming number of security tools, the rise of new acronyms, and the pressure to meet regulatory and customer security expectations, it’s challenging to discern what really holds significance. CISOs are constantly inundated with pitches; does going back to the basics help?

Gary: It’s crucial to grasp the organization’s risks, establish processes to address those risks, and ensure effective remediation. Often, we acquire many tools and generate numerous findings, but there’s confusion due to the overwhelming number of critical findings, making it challenging to take appropriate action.

Aayush: What’s truly critical? Is it about having the right data encrypted, or is it about whether the encryption algorithms are secure or compromised or how encryption keys are protected?

Gary: Encrypting data doesn’t help if your encryption key is easily accessible. Basic compromises like that happen. First off, know where your data resides and who can access it. Role-based access control is key. Also, purge data when not needed. Why protect data you no longer use? Store it securely offline if required for compliance.
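Gary’s advice to purge data when it’s no longer needed translates directly into a retention rule. A minimal sketch, assuming a hypothetical 365-day policy and records that carry a `created` timestamp:

```python
import datetime

RETENTION_DAYS = 365  # hypothetical retention policy

def purge_expired(records, now=None):
    """Return only the records still inside the retention window.

    Anything older should be deleted, or moved to secure offline
    storage if compliance requires keeping it."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    cutoff = now - datetime.timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["created"] >= cutoff]
```

Running a job like this on a schedule means there is simply less data to protect, which is exactly Gary’s point.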

Supply chain worries are real. We hand off data to vendors. Instead of costly site visits, focus on training. Vendor breaches often stem from email compromise, phishing, ransomware. Training on spotting fake emails matters more than fortress-like data centers.

Aayush: When organizations aim to return to basics, where’s the starting point? Is it examining frameworks like SOC 2, ISO 27001, or NIST 800? These frameworks share similar controls, so what’s the initial step?

Gary: When checking out vendors, we start with SOC 2 or ISO 27001. These cover the basics. Once that’s sorted, we delve into areas like data exchanges. We prioritize identity aspects – single sign-on, robust authentication. Local authentication is out; access control and removal upon departure are in. This way, we streamline our focus.

Aayush: Imagine I’m a large SaaS company with $200-$300 million in ARR and a 2% revenue infosec budget. I’m just starting on security. What’s the absolute worst advice you could give me?

Gary: The worst advice? Talk to every vendor, listen to every pitch, and pile up tools. An inbox full of vendor-driven findings isn’t the way. Instead, I focus on our existing tools, collect their findings, and then prioritize. Resources are limited, so we fix the high-priority issues first.

That leaves us with two choices: invest more resources to fix every finding, or accept certain risks. For instance, if we find application vulnerabilities, do we have a web application firewall to mitigate them while developers address them?

Aayush: With attackers being fast and sophisticated, how do we balance basic infosec controls against evolving threats? Is there a tradeoff between simplicity and effectiveness against smart attackers?

Gary: Attackers take the easy route, so start with strong security basics. Prioritize clean security hygiene before advanced measures. Having the right processes in place is crucial. Don’t invest in tech that finds the wrong things. Use technology to spot issues, but prioritize, understand, and remediate findings through proper processes.

Aayush: How do you present a case to secure a budget for security, especially when establishing controls from scratch? Could you share your experiences navigating the process of obtaining security budgets?

Gary: Security shouldn’t just be viewed as a cost center. It’s about enabling the business, not blocking it. We aim to integrate security controls into developers’ tools, creating that security “easy button.” 

My current focus is helping teams do just that—no unnecessary overhead. Demonstrating how we reduce risk and empower the business makes these conversations smoother.

We align with the company’s risk tolerance and the board’s stance. It’s about understanding and mitigating risks to match acceptable levels. Every board wants minimal risk, but investment has limits. Our role is to clarify accepted risk, ensure comfort, and determine the necessary investment to lower risk if needed. 

Aayush: When selling LLM use cases to large enterprises, what are the top four or five crucial controls startups must have in place to enhance their appeal to these enterprises? 

Gary: Public information benefits all. Think Disney using Google for character recognition—identifying Mickey Mouse in pictures. But when AI affects how our business operates and thinks, we guard that IP. Data segregation is key, even when sharing learning. We isolate our data by not feeding it into an accessible system. There’s the public good too, where everyone contributes.


Risk Grustlers EP 3 | AI with a Pinch of Responsibility

We explore the burgeoning world of AI in the third episode of our podcast Risk Grustlers, with none other than Walter Haydock, Founder and CEO of StackAware, a cybersecurity and risk management platform.

If you’re a security enthusiast you’re sure to have come across Walter’s blog Deploy Securely. You name an infosec hot topic, and Walter’s written about it!

In this episode, Walter gives us a crash course on all things LLM – from listing the differences between using a self-hosted LLM and a third-party LLM to explaining the top five risks to watch out for while using them.

He also discusses how companies can collect information responsibly with our CEO Aayush Ghosh Choudhury. Get ready for an exciting deep dive into the world of AI!

Watch the complete podcast here

Here are some highlights from the captivating episode.

Aayush: Companies seem to use either an open-source Large Language Model (LLM), train it themselves, and build on it or employ a third-party pre-trained LLM, like ChatGPT. How does the deployment approach affect the potential risks? What are the main pros and cons, when it comes to security and risk management?

Walter: So, the classic cybersecurity risk assessment still holds true. It’s all about deciding if you should do your work and data handling in-house or hand it off to someone else.

Things like the vendor’s vulnerability management policies and their response capabilities matter, just like your own capabilities. Whether you’re using your own environment or someone else’s, those factors still count.

Now, let’s talk about AI tech, like Large Language Models (LLMs). There’s a tricky twist here, I call it unintentional training. This happens when you feed data into a model that you didn’t mean to, like stuff that your security policy might prohibit.

If the model learns from this unintended data, it could bring up sensitive info with the vendor or their other customers. That could be a mess for you.

It’s not easy to pin down how often this risk comes to life. There are examples out there, like Samsung accidentally sharing confidential stuff with ChatGPT. There’s an article on it, but it’s not totally confirmed.

Amazon also had an interesting incident. Some answers from ChatGPT sounded a lot like Amazon’s interview questions. This implies someone might’ve trained the model using those questions. So, on top of regular third-party risk, you’ve got the twist of unintended training by the model. 

Aayush: As a vendor, how can I figure out the risks involved in these two models? Is one option inherently riskier than the other? If it is, what’s the deal with that?

Walter: No, one isn’t inherently riskier than the other. They both come with their own characteristics and tradeoffs. If you’re into a third-party API like OpenAI’s, you’re banking on them to maintain the confidentiality, integrity, and availability of the data that you provide to it.

Now, OpenAI does things differently for data retention. The API deletes data in 30 days, but the user interface for ChatGPT is murkier. They’ll hang onto it for as long as it’s needed, which could be forever. You’ve got to dig into their data policies and security setup.

For instance, OpenAI’s got a SOC 2 type II attestation. They’ve passed a third-party security audit. However, earlier, some user info leaked due to a vulnerability. It’s like giving someone your data to handle – you don’t see exactly how they’re keeping it locked up.

Now, if you take the self-hosting route, which is like using infrastructure as a service (like AWS), it’s all about where you land on the tech stack – higher up or down. You can peek at data processing and even have model control. You could even roll back if you goof up.

But, yep, risks hang out here too. You’re responsible for running the show, managing updates, and fixing vulnerabilities. It’s like housekeeping, but for your tech. And misconfigurations are a major culprit for security breaches, which you definitely want to dodge.

Some big players even struggle with keeping things up to date due to complex processes. While that might be cool for availability, it could be risky for security if a major vulnerability pops up and you need to patch it real quick.

Thing is, a software as a service provider (SaaS), like OpenAI, is a pro at running things speedily and effectively. So these are the tradeoffs you’ve got to weigh for security.

Aayush: In terms of liability, what is the difference between using a self-hosted LLM and using a third party LLM should there be an incident?

Walter: It all comes down to your particular contractual and regulatory commitments. Certain regulations or contractual terms might outright forbid entrusting data to a third party, either entirely or without getting the green light from the original data owner. If you’re bound by those stipulations, make sure you adhere to them diligently and follow the terms of your contract.

However, assuming you’re not tied down by such requirements, your primary concern should be shielding data from any potential loss while upholding its confidentiality, integrity, and availability. Determine the most effective route that achieves these goals while still staying well within the lines of legal and compliance regulations.

Aayush: For application developers leveraging a third-party LLM to create a tool, there’s a wealth of information out there, including resources like the OWASP Top 10 and the NIST AI RMF framework. However, it can be overwhelming, especially for those working on LLM-based utilities. Can you list the top five key concerns they should keep an eye on?

Walter: Number one, would be direct prompt injection. This is followed by indirect prompt injection. Then, coming in at number three is the unintentional training issue I mentioned earlier, which becomes especially relevant with third-party services. 

Number four is data poisoning of the model itself. Finally, rounding out the top five is the risk of privacy regulation violations specific to LLM usage. 

Aayush: Can you go into detail about direct prompt injection?

Walter: Prompt injection is quite a thorny issue in terms of security. It’s a challenge that doesn’t have a clear-cut solution. Balancing risks and rewards is essential here. Even though this problem isn’t fully solvable, it doesn’t mean you can’t use LLMs. Direct prompt injection is the simplest to grasp. Examples abound where users tell the model to commit crimes, create malware, or hack systems. Despite safety layers, people can still breach these bounds through direct prompt injection.

Direct prompt injection implies a user is intentionally manipulating the model against rules, terms, or laws. Picture a scenario where the LLM connects to a backend function that can start or stop a process. Imagine the chaos if an attacker tricks the LLM into shutting down a critical service through clever manipulation.

To counter such risks, you can employ rules-based filtering, but it’s not foolproof due to human ingenuity. A supervisory LLM can serve as a security checkpoint, evaluating prompts for hidden malicious content before the main model processes them. On the backend, data access control matters. Restrict the chatbot’s access to specific customer information, avoiding exposure of others’ data. Use non-LLM functions to manage data access and authentication securely.
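The rules-based filtering Walter mentions can be as simple as a deny-list checked before the prompt ever reaches the model. The patterns below are hypothetical and, as he notes, easy for a determined user to evade, which is why a supervisory model makes sense as a second layer:

```python
import re

# Hypothetical deny-list for a first-pass, rules-based prompt filter.
# This is a coarse screen, not a complete defense against injection.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"shut down|stop the .*service",
]

def screen_prompt(prompt):
    """Return True if the prompt passes the coarse rules-based check."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

Anything that fails the screen can be dropped outright or routed to a supervisory model for closer inspection before the main model ever sees it.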

Aayush: Could you give us a few examples of indirect prompt injection, which was the second risk you mentioned?

Walter: So, this gets a bit trickier because security researchers are already pulling off some impressive feats. They’re embedding AI “canaries” in websites. These canaries instruct autonomous agents to perform actions, some harmless like saying hi, while others are more damaging, like extracting sensitive info or passwords from the user’s system. This creates a prompt injection issue, where the model follows someone else’s directions, inadvertently causing problems.

Here’s a neat example: A security researcher used multiple ChatGPT plugins and a web pilot tool to direct the model to a website with malicious instructions. The model executed a workflow, accessed the user’s email, and retrieved sensitive data. That’s indirect prompt injection revealing sensitive info.

Be cautious with autonomous agents. There’s an open-source project called Auto GPT that lets a model roam the web independently. Scrutinize these use cases carefully. Applying safeguards to function calls, especially if the LLM can trigger actions, is crucial. You’d want the right checks and balances before diving into this.

In some cases, users might need to explicitly consent, but that’s not foolproof. Segmentation of duties and strong authentication are essential controls. Avoiding autonomous LLM use, unless it’s necessary, might be wise. If you must use them, consider trusted websites to limit risks. While it won’t guarantee safety, it could lower the chances of stumbling upon a malicious script.
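The safeguards Walter suggests for LLM-triggered function calls, such as segmentation of duties and explicit consent, can be sketched as a gate between the model’s requested action and its execution. The action names here are invented:

```python
# Hypothetical gate for LLM-triggered actions: read-only actions run
# freely, destructive ones require explicit human confirmation, and
# everything else is rejected outright.
SAFE_ACTIONS = {"fetch_status"}
CONFIRM_ACTIONS = {"restart_service"}

def execute_action(name, handler, confirmed=False):
    if name in SAFE_ACTIONS:
        return handler()
    if name in CONFIRM_ACTIONS and confirmed:
        return handler()
    raise PermissionError(f"action {name!r} refused")
```

The key design choice is that the allowlist lives outside the model: no amount of clever prompting can add a new action to it.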

Aayush: Could you tell us ways to mitigate the third risk that you listed—the unintentional training issue?

Walter: Imagine a scenario where an employee accidentally feeds personal data, credit card info, or credentials into a big language model. If that model is managed by a third party, it’s harder to undo the data entry later on. And that model might spit out that info to someone else, jeopardizing privacy.

On the confidential side, let’s say you input a trade secret. If the model uses that info, you might’ve just handed your competition a solution they didn’t have. Training chatbots can also lead them to new strategies they didn’t know before, potentially sharing your secrets.

Mitigating this risk involves a policy framework – clear guidelines on what kind of info can go into which model. You’d want to steer clear of personal and sensitive data in third-party models unless you have solid controls. Some services, like Azure OpenAI government, are certified for sensitive info and might be okay.

Another way is pre-processing data before it hits the model. I made an open-source tool, GPT Guard, that detects sensitive info and replaces it before the model sees it. Commercial tools do this too. And if you self-host the model, you have more control and can roll back or monitor it closely.
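The pre-processing Walter describes, detecting and replacing sensitive strings before the model sees them, can be approximated with pattern matching. This is not GPT Guard’s implementation, just a minimal sketch of the idea:

```python
import re

# Hypothetical redaction pass: obvious sensitive patterns are swapped
# for placeholders before the text is sent to a third-party model.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),  # 13-16 digits
}

def redact(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Real tools use far more robust detection (named-entity recognition, checksums for card numbers); regexes alone will both miss data and over-match, so treat this only as a sketch of the technique.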

However, self-hosting isn’t a silver bullet. If you have a customer chatbot with your secret sauce, even if it’s internal, a customer might dig it out. So the same safeguards apply, just with more insight into the model’s behavior.

Aayush: Can you explain data poisoning? How is it different from direct prompt injection and unintentional training?

Walter: Unlike prompt injection or unintentional training, where the model itself starts clean, data poisoning assumes the model began well but the data used to train it was intentionally tampered with. This can change how the model operates.

For instance, think of someone creating fake web pages praising themselves as a developer. The model learns from this and later, in a job interview where the interviewer uses the same model, the person gets the job because of these fake accomplishments. That’s data poisoning. Another case might be training the model to be mostly predictable, but at certain times, it secretly leaks sensitive data.

Imagine you’re building an internal model to detect credit card fraud. You show it both fraudulent and legitimate cases. Now, if an attacker sneaks in manipulative data that tricks the model, it might behave fine most of the time but leak credit card data to a malicious endpoint occasionally.
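Walter’s fraud example can be made concrete with a toy classifier. Everything here, including the data, the labels, and the single poisoned point, is invented, but it shows how a few tainted training samples change a model’s behavior:

```python
# Toy 1-nearest-neighbor "fraud detector". Trained on clean labels it
# flags a suspicious transaction; retrain it with one mislabeled
# (poisoned) point planted by an attacker and the same transaction
# now looks legitimate.

def predict(samples, features):
    def sq_dist(sample):
        return sum((a - b) ** 2 for a, b in zip(features, sample[0]))
    return min(samples, key=sq_dist)[1]

clean = [((1.0, 1.0), "legit"), ((1.2, 0.9), "legit"),
         ((9.0, 9.0), "fraud"), ((8.8, 9.1), "fraud")]

# Attacker sneaks a "legit"-labeled point into the fraud cluster.
poisoned = clean + [((8.9, 8.9), "legit")]

suspicious = (8.9, 8.95)
```

With the clean training set the suspicious transaction is classified as fraud; with the poisoned set, the mislabeled neighbor wins and it passes as legitimate.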

Two scenarios can cause this. One, the person training the model might intentionally incorporate malicious data. Or, they might be malicious themselves and insert a tainted model into an organization’s workflow. An example from Mithril Security demonstrated how someone could almost upload a poisoned model into a platform like Hugging Face AI.

In one case, the model claimed Yuri Gagarin was the first man on the moon, which is incorrect. These risks show that even if the model starts pure, corrupted data or malicious actors can lead it astray in unexpected ways.

Aayush: Now, moving on to the fifth point, which involves privacy regulation violations. You might currently be adhering to local regulations, but these rules are ever-changing and vary greatly between countries when it comes to LLMs. Given the dynamic nature of these regulations, how can companies navigate this uncertainty? How do they mitigate business risks? Is there a foundational framework they can adopt?

Walter: The existing regulatory frameworks like ISO standards or even privacy laws such as GDPR and CCPA, while important, sometimes struggle to keep pace with swiftly evolving technologies like AI. New regulations are emerging on the horizon, like the potential European Union AI Act, which could put certain AI applications off-limits.

So, using AI inherently involves a degree of risk. To tread wisely, especially in terms of privacy, the smart approach would be to limit the data you collect and process. I mean, it’s baffling how much unnecessary data some businesses gather, right? Those lengthy forms and extensive recordings, they often don’t serve a clear purpose. And, some of that data could even fall under the biometric processing category under GDPR, if you’re analyzing videos for sentiment, for instance.

So, the golden rule here is to only gather the bare minimum personal information needed to achieve your goal. But I won’t sugarcoat it – there will be gray zones. We might see regulatory rules emerge after some enforcement action slaps a company with fines. It’s a bit of a dynamic dance, and companies need to be ready to pivot swiftly as the landscape evolves.

Aayush: What are some ways in which companies can exercise control over the nature of information they’re collecting?

Walter: Take a peek at your customer onboarding process, for instance, and just gaze at all those forms you make customers fill out. Consider if you really need all that information right off the bat, especially at the start of your marketing funnel.

My suggestion is to keep it simple. Grab an email if you can – that’s often enough. Maybe you’ve got to gather specific residency details for compliance, but here’s the deal: Don’t go overboard. Every piece of data might seem like a business booster, but if you’re not using it immediately, why bother, right?

Now, talk about data storage. Having duplicates isn’t just operationally messy, it’s also a security risk. So, streamline. Stick to one main data store, backed up of course for emergencies, but without clones floating around. And emails are a wild territory. People put all sorts of stuff in there. To keep things in check, use hyperlinks to secure spots like Google Drive or SharePoint. You can yank access if needed, trimming the data shared.

One more thing to consider: LLMs might dig into those emails down the line, for various reasons. By being careful with what goes into those emails from the get-go, you’re actually reducing the privacy risks down the road.

Don’t forget to check out these free resources to learn more about what was discussed in the podcast:

Security policy template

https://bit.ly/gen-ai-security-policy

5-day email course on AI security

https://bit.ly/ai-email-course


Risk Grustlers EP 2 | Do Auditors Have Horns?

In the second episode of our podcast, Risk Grustlers, we are putting the spotlight on GRC with Vignesh Kumar, Senior Manager of Security and Privacy at Microsoft. 

Formerly a project manager for one of the largest equipment manufacturers in the world, Vignesh shares what drew him to GRC and makes us view it through a lens of admiration rather than dread.

He offers a peek into the world of audits – both external and internal – and demystifies both processes. He also shares what an internal auditor should keep in mind when trying to establish a rapport with the teams they audit.

Watch the complete podcast here

Let’s dive right into it!

Aayush Ghosh Choudhary: You started off in a completely different field. Can you tell us what sparked your interest in GRC and led you to where you are today?

Vignesh Kumar: When I came to the US for my master’s in management information systems, I learned about big four risk management consulting. I was always drawn to careers involving uncertainty. Then, I got the chance to work for a company that wanted to handle things in-house instead of outsourcing their IP activities. I joined as a technical product manager, acting as the bridge between business and engineers.

I also managed a suite of applications, handling hygiene and compliance. This is where my practical experience with the GRC space began. I answered questionnaires from the testing team, which was quite interesting. These results were then shared with business unit leaders.

The work I did was at the company level, while I managed several applications with a small team. It got me excited about working on things impacting the whole organization and presenting results to business leaders, even the C-suite.

This is when my interest in transitioning to GRC sparked. And that’s how my journey in this field started.

Aayush Ghosh Choudhary: Very often, the remediation steps created by the GRC team are viewed as burdensome by the other teams. Even though you experienced this burden, it led you to have a fascination for GRC. Any reason why?

Vignesh Kumar: The process of implementing fixes to bring an application back in line with compliance can be tough. But, often, these fixes not only address issues but also make the application better, ultimately making your life easier down the road. So, it’s actually beneficial.

What I came to really appreciate about GRC was how it could positively impact my applications—those four or five I was responsible for. GRC’s ripple effects spread across the organization and ensured that hundreds of applications across three business units were compliant. 

The idea of the impact on the organization and the chance to dive into a suite of 100 applications, building relationships with business leaders from various parts of the company—it just seemed really intriguing and appealing to me. It felt really cool and sexy to me.

Aayush Ghosh Choudhary: I love that you used the word “sexy”, which isn’t typically associated with GRC. This brings me to the question, which part of GRC do you enjoy the most?  

Vignesh Kumar: I think what I enjoy the most is the opportunity to learn. So, there’s this whole realm of opportunities to dive into, understanding the applications driving the business. It’s like figuring out what these apps do, how they fit into the grand scheme of generating revenue, whether they’re on the cloud or on-premises, and spotting risks tied to those differences.

You really learn the most through people. Imagine sitting down, sipping coffee with the folks who own these apps and asking them how their system works. It’s way more effective than plowing through tons of dry documentation, right?

All this learning goes hand in hand with communication. You have to get across feedback, sometimes pointing out little hiccups without coming off like a nitpicker. It’s like you’re trying to give your engineering teams a hand, not play the blame game, even though, let’s face it, finding flaws is what we do.

But honestly, the drive to learn about every nook and cranny of your company, those different apps that prop up the whole operation—that’s what led me into GRC.

Aayush Ghosh Choudhary: What are some parts of it which felt a lot more difficult than you had originally anticipated?

Vignesh Kumar: From a project management angle, you’ve got to deliver things within a set timeframe. Sometimes it’s dealing with apps or processes you’re already familiar with, where you can chill a bit and focus on risk and control.

But then there are those times when you’re in the dark. You could be dealing with a new process, a new system that is maybe not even live yet. You’ve got to dive in, figure it all out, assess it, and still suggest fixes—all in a tight window.

Regardless of the system’s complexity, compliance isn’t patient. It’s all about hitting those time marks—like, “Did you check these boxes this quarter? Did you do it within these four weeks?” It’s a race, for sure.

Another hurdle is dealing with the repetitiveness. You’re learning the ropes with a new system, all pumped up. But then, it becomes a routine, like a broken record, doing the same routine over and over. That’s a challenge on its own, too.

Aayush Ghosh Choudhary: How do you deal with this repetitiveness?

Vignesh Kumar: I use my people-management skills to delegate work that I find repetitive. I assign the same task to different people to avoid any assumptions based on repetitiveness. I’ve seen external auditors stroll in casually like they’ve done this dance before. They expect things to be a certain way. Like, they’d ask for screenshots of past setups and replicate it. They assume it’s all good, but sometimes, things aren’t as smooth as they seem.

So, to keep things solid, even if it feels like a rerun, it’s crucial for GRC pros to go back to basics, ask the right questions, and make sure things are on point.

Aayush Ghosh Choudhary: In your experience, how do internal and external auditors vary?

Vignesh Kumar: It’s interesting how engineering teams tend to feel more at ease with internal auditors compared to the external ones. External auditors have a specific mission. They’re there for either your PCI, your SOC, your ISO, you name it. They stick to their set agenda and scope, focusing only on what they want to see.

They often miss the bigger picture, that sense of ownership. Now, internal auditors have a different vibe. They’re part of the company, driven to make things better and provide an independent view on controls. They don’t just box themselves into compliance checkboxes; they kick off with risk assessment.

They dive into enterprise risk management, understanding the big risks for the upcoming year. That’s how they set up their audit plan. They zoom in on specific systems and processes, wearing that risk-based hat. They dig into risk assessments, crafting a set of controls based on what the systems should ideally have, and then they go for testing.

So, the key difference is the sense of ownership. Internal auditors have that, while external auditors usually stick to compliance. It’s about being risk-based versus compliance-focused. 

Aayush Ghosh Choudhary: What’s been the typical response to you coming in for an internal audit?

Vignesh Kumar: The first time they’re usually super excited. They are excited to show how their systems work and hear our suggestions. The second time, even if it’s for a different application, they ask us why we’re back. By the third visit, they wonder, “Why us? Why always us?” And that’s when we say, “When we come back to you, it means you’re crucial for the company.” This creates a sense of responsibility and ownership.

Aayush Ghosh Choudhary: Do you first build chemistry with the respective teams, share a few drinks and make yourself look like an insider that’s there to help them and then eventually build up to the tougher part of the conversation? How do you make them comfortable?

Vignesh Kumar: When we start dealing with these stakeholders, our main focus is on building those relationships, right? But the thing is, the initial point of contact is all about that first audit.

If they can’t meet that 48-hour deadline, we’re cool with it, as long as they let us know in advance and explain why. This initial phase is mostly about learning, discovering the systems and processes.

Toward the end of the audit process, when we’re heading into that closing meeting with the C-suite folks – that’s when things can get a bit tense. The team that’s been working hand in hand with us throughout the audit is right there too, and they start challenging our observations. If a risk didn’t result in a security incident, they tend to question why we flagged it.

That’s where education plays a role. We try to show them that the risk isn’t about what went wrong; it’s about what could go wrong. Now, things can get a bit heated during the reporting phase, but it usually smooths out in no time.

Aayush Ghosh Choudhary: How can internal auditors position themselves as helpful insiders?

Vignesh Kumar: It really boils down to people skills. We’re all human, after all. When it comes to the audit process, it’s not a gray area. It’s either you’ve got control or you’ve missed the mark – that’s just how it is. As auditors, we’re all about facts. We gather the data, lay out the evidence, and present it as it is. Now, when it comes to the report, it’s not about throwing someone under the bus. It’s more about how you convey the message and keep that relationship intact.

During a six-week engagement, the real distinction lies in the nuances. Everything else is about stating the facts, except for that relationship part, which isn’t something you can put a number on. That’s where things can get a bit tricky, especially for those starting out. But with experience, delivering these messages becomes smoother and more natural. It’s all about finding the right way to get the point across.

Aayush Ghosh Choudhary: What are some of the common challenges you face with stakeholders?

Vignesh Kumar: One of the big challenges I’ve come across is this lack of clear ownership. It’s like everyone agrees when you point out the number of vulnerable servers without proper patches – that’s just raw data, hard to argue against. But when it comes to who’s responsible, things get fuzzy.

For instance, take engineers managing a fleet of servers. They spot vulnerabilities and need help from the security team to address them. The hitch is, engineers are all about speed and the product. They’re not keen on disrupting their production servers for security’s sake. While security is all about, well, security. It’s a classic accountability gap.

At the end of the day, a vulnerable server means the company’s on the line. Reputation, security, potential fines – it’s all at stake.

Back in the day, it was a tussle between engineering, security, and the GRC team. Each guarding their turf. What we had to do was remind them it’s one company, one goal. I even pulled in the CFO once for a meeting, just to drive home the point: “Look, I don’t care whose problem it is. Just fix it. Our company’s security is on the line.”

Aayush Ghosh Choudhary: The idea of GRC seems to differ across the globe with different countries adhering to different regulations. So, do you think that the skills acquired as an auditor or a risk manager have a strong local context, or can they be applied across the globe?

Vignesh Kumar: Basically, it comes down to grasping the IT basics. You’ve got to understand system architecture and get a handle on those fundamental general controls.

Once you’ve got that down, it’s about making these things work efficiently in your environment – no matter what you’re dealing with, be it products, systems, or data. Whether it’s CCPA, GDPR, or any other regulation, it’s like a skill set you can pack up and take anywhere.

Sure, the core competencies stay the same, but the way you apply them might shift depending on where you are. So, bottom line, these skills can be used all around the globe.

Aayush Ghosh Choudhary: What according to you is an ideal GRC solution?

Vignesh Kumar: What really matters is how much time your solution saves, right? From an industry perspective, it’s about pinpointing what we’re after. Let’s say, for instance, I need to convince our CFO that getting this certain tool is worth it. I’ve got to show them the return on investment, the cost benefit.

So, I paint a picture – implementing this tool cuts down on all these steps. That’s the time I can use elsewhere. And not just that, it hits multiple regulations. It’s a win-win for different teams – internal audit, security, you name it. Everyone benefits from putting it in place.

Authored by

Aayush Ghosh Choudhary
Co-founder & CEO at Scrut

Risk Grustlers EP 1 | Fancy some acronym soup, mate?

In the first episode of our podcast Risk Grustlers, we unravel the complexities of dealing with cyber risks with Davis Hake, the co-founder of Resilience, a pioneering cyber risk solution company based in New York City. 

Davis takes us through the journey of his company Resilience, which is redefining how companies think of the ‘economics’ of risk management with their innovative approach to addressing cyber risk, and the imperative need for a comprehensive understanding of risk management.

Prepare to be enlightened as he delves into the changing insurance landscape, the need for engaging buyers early on and the importance of knowing what works for your business. 

Watch the complete podcast here:

Let’s take a look at some important highlights from the illuminating podcast.

Aayush: Why don’t you tell us a little about the journey of Resilience?

Davis: Back in 2016, we kicked off our journey into Resilience. At first, we focused on supplying data to insurance companies. Our big idea was different though – we weren’t keen on just adding another security tool. What we really aimed for was to transform how cybersecurity economics worked. That’s why we got intrigued by the behavior-changing potential of the insurance industry.

Think about insurance for a moment. It’s what makes seatbelts mandatory in cars and prevents massive fires from wiping out cities – all thanks to modern safety and building standards. Cybersecurity lacks something similar. When we delved into what was truly necessary, we saw the intricate risks and threats that abound.

Our goal? Blend analytics with actionable insights to enhance cyber hygiene for companies. And guess what? This model benefits both the companies and the overall business growth when everyone evolves together.

AGC: Insurance costs are going up, and insurers are getting pickier about vetting how strong a company’s security measures are before offering coverage. What’s causing this change? And how should companies prepare for it?

DH: One of the big things nowadays that companies are really starting to think about is how to handle risks, especially when it comes to cybersecurity. I mean, there are a few ways to go about it. First, you can try to lower the risk by using security measures and controls. Another option is to avoid risky behavior altogether, but let’s face it, in today’s world, almost every business operates online, so you can’t completely avoid cyber risks. And then there’s the idea of transferring the risk, like getting insurance.

But here’s the catch – these strategies don’t work on their own. You can’t just dump all your cyber risk on an insurance company and forget about it, without taking any other precautions. In fact, back in 2019, we saw a real change in the cyber insurance scene. See, before that, insurance was mostly about covering the costs of data breaches and legal battles.

But then something shifted. With the rise of tactics like ransomware attacks, insurance companies started facing huge losses from paying off ransomware demands. Businesses were getting hit hard and had no choice but to pay up, even smaller ones. So, the insurance industry had to change its game. It started focusing more on not just preventing data breaches, but on helping companies become stronger in the face of these threats.

AGC: How does Resilience approach data breaches?

DH: We don’t just step in when something goes wrong. We’re there right from the start, while you’re getting your policy and even when you’re dealing with a claim. Our goal is to work together with you, to share the risk, not just pass it along. We’re like your early warning system, flagging any issues that could lead to a claim.

But it doesn’t stop there. We’re all about education too. We’ll let you know what strategies are most effective in cutting down the costs if something does happen. It’s all about understanding and tackling the unique risks your organization faces. And that’s the key to a solid cyber risk and resilience plan. That’s what we’ve seen really make a difference for our clients these days.

AGC: How should companies approach cyber risk management? 

DH: For security leaders, this whole cyber risk thing is a real puzzle. You’re dealing with ever-changing threats from human adversaries, shifting targets and industries they’re after. Then you’ve got your own industry’s regulations, various control frameworks from vendors, and insurance companies throwing their own set of questions at you.

Now, with the SEC zooming in, even senior execs at the board level are asking, “What’s our plan for this risk? How do we measure and manage it? How mature are we?” Here’s the kicker: we need to shift from a compliance-driven risk approach to a risk-driven compliance approach.

Companies need to figure out what’s crucial and impactful for delivering value to their clients. Start from there and build up your security measures, which aren’t just technical controls. It could be governance, incident response planning, training, access management policies – you name it.

Master the basics, make them second nature, and then stack up those different compliance and control frameworks. This way, you can show your board, “Hey, we’re SOC 2 compliant, and we’re on our way to nailing HIPAA compliance.” Our go-to framework for board reporting is the cybersecurity framework.

But here’s the key: if you’re just aiming to pass a SOC 2 audit, you’re kind of missing the bigger picture, you know?

AGC: What experiences led you to come up with Resilience?

DH: When we kicked off in the US government, we landed right at the dawn of our awareness about critical infrastructure and the massive cyber risks it faces. From the get-go, I’ve been all about seeing cybersecurity risk as a big picture. You know, thinking about it from an all-hazards perspective.

So, it’s not just about getting the right cybersecurity tool. It’s a whole process to lock things down, especially after we got clued into the vulnerabilities through incidents like Stuxnet. And guess what? This applies across the board – rail, food, health, education. The pandemic really hammered home how delicate supply chains can be.

But it’s not just about industries; it’s the different types of threats too. Imagine, a communication breakdown can bring a whole business crashing down. Take Colonial Pipeline, for example. Their entire operation got impacted, and bam, down went their business. 

This is what fueled our drive when we thought about launching our own company. We didn’t want to just create another gadget. We needed something that could change the game in how we handle cyber risk economics. We aimed to break those silos within a company, connecting the dots and making the whole organization more resilient against cyber risks. 

AGC: Can you walk us through your thought process as you embarked on your business venture? Did you nail down the product segment right from the start, or did you have to pivot along the way?

DH: At first, we were more focused on what our users wanted rather than what the buyers needed. Back then, we were all about creating super advanced cybersecurity analytics for insurance companies. We had these awesome cyber insurance experts who were like, “Yes, this is exactly what we’ve been waiting for!”

But, when it came to landing those bigger contracts, we hit a roadblock. Turns out, these larger businesses weren’t looking for just another cyber risk rating tool. They wanted something that would genuinely level up their operations. So, we shifted gears. We started looking at how our cyber screen analytics could improve their everyday processes. We aimed at scaling the skills of their top-notch underwriters and even thought about ways to share Social Security benefits.

That’s when things clicked. When we aligned our analytics with their real business needs, we struck gold. Those major contracts started pouring in, and it was like a sign telling us we were onto something big.

AGC: When you were trying to evangelize your early buyers, how did you make them see the problem and the need for your solution? 

DH: Coming from a politics background, I was used to making my case and rallying people behind ideas. What I quickly picked up, thanks to some amazing mentors, was that it’s less about talking and more about listening. Empathy is key, understanding the nitty-gritty problems users face every day, and then crafting a solution that seamlessly fits into the bigger business picture.

And let me tell you, soaking in that user love became my mantra. I even got to lead the customer success team for a spell. I vividly remember braving a snowstorm in February to huddle with an underwriting crew out in Connecticut. I dug deep, figuring out their pain points and learning what slowed them down.

Honestly, that’s the beauty of startup life. Engaging with these folks, truly getting a glimpse into their daily grind – it’s like striking gold. I’d come back armed with a notebook full of insights, feeling completely inspired. The best part? In just a few months, you could engineer something that drastically improved their day-to-day world. 

AGC: What advice would you give young professionals in terms of how they can break into cybersecurity and even eventually work their way up to being CISOs?

DH: The security field thrives on diversity, both in experience and skill sets. Speaking from personal experience, I came from a political science background and tinkered with computers since I was young. However, I didn’t have formal technical training until I dove into this field. It’s worth noting that in the security realm, you can’t just talk the talk. You’ve got to get your hands dirty – set up network taps, deploy firewalls – it’s crucial hands-on work.

Starting out, I’d recommend diving into courses like ethical hacking, penetration testing, and security concepts. Get hands-on experience with your own computer, setting up these tools.

Certifications are abundant in this field, keeping curious minds engaged. But there’s more to it. There should be a holistic view of risk assessment. It’s about taking those checklists and shaping them to your organization’s unique needs. Then, conveying these insights to non-technical folks is key. Move beyond the fear-driven approach and embrace empowerment.

We need to shift the perception of security from being a mere cost center to a strategic asset. It’s about leveraging security to mitigate risks, enabling growth, launching new products, expanding into new regions – you name it. As security leaders, we must bridge the gap between our vital work and the business’ revenue and operations goals. After all, whether public or private sector, we’re all accountable to citizens, customers, or clients. Ultimately, it’s about delivering value back to them.

AGC: The acronym soup has intensified. Tool fatigue is real. What would you advise mid-market CISOs to focus on?

DH: At Resilience, we work with mid-market companies, seeing two sides of the spectrum. Some are growing close to a billion in revenue, acting like large enterprises. Others hover around 300 to 500 million, facing similar compliance demands, like fintech firms or banks.

Now, whether big or small, our initial advice is universal. Step back, grasp what fuels your business daily. This isn’t just for execs; the entire team needs to sync up.

We link roles, from risk managers to CFOs, connecting expertise and breaking down silos. Aligning on driving the company and customer value, we quantify setbacks like major incidents in the next few years, aided by cyber risk modeling.

The beauty? It’s not just for techies. You can discuss these risk probabilities with non-tech execs, reaching up to the boardroom. It’s about understanding your business’ risk tolerance.

Once that clicks, compliance falls into place, tailored to your business – think NYDFS standards for fintech in New York or HIPAA for telehealth, plus California Privacy Act.

And beyond standards, smart practices shine. Encrypting customer data at rest, and practicing data recovery to fight ransomware. By shortening recovery time, you cut ransom risks, keeping operations flowing despite threats.

AGC: How do new founders diving into cybersecurity navigate the landscape? What ideas should they focus on? Which problems should take the lead?

DH: In the security innovation space, a major challenge is often having a cool solution in search of a problem. Instead, trends in IT should guide problem-solving, like shifts to the cloud, evolving threats from AI, and spear phishing. 

As a founder, focusing on real problems that impassion you is key, ensuring you stay driven. Budding founders should focus on tangible, real-world issues that ignite their passion. 

Our journey as co-founders was a tale of bridging gaps. There was a gap between technology and the pressing needs of businesses to handle cyber risks. And that’s where our idea came into play – the birth of cyber resilience. 

When it comes to convincing buyers, especially in the insurance realm, relationships are gold.  We took it up a notch, personally meeting industry veterans. It wasn’t just about shaking hands; we were diving headfirst into their world, soaking up their challenges firsthand. This hands-on approach led to a whirlwind of brainstorming, prototyping, and validation.

The industry was craving this freshness. They were stuck in a bit of a legacy tech rut. Our fast-paced problem-solving hit the right notes, bringing us into the spotlight.

It’s important to hunt down those real problems that light you up, and not be afraid to make personal connections.

Authored by

Aayush Ghosh Choudhary
Co-founder & CEO at Scrut

Breeze through Security Questionnaires with Kai

As companies increasingly rely on cloud and SaaS to run their organizations, the exposure of business and consumer data to potential risks is heightened. To instill confidence in this transition, their strategic software partners must demonstrate they are doing all they can to protect this data. The result – crucial but tedious security questionnaires.

Being on the receiving end of security questionnaires is no fun. 

The sheer volume of questionnaire requests can overwhelm any growing organization, big or small, hindering speed and pulling focus from strategic priorities. It doesn’t help that the right information is scattered across teams and systems, making it an arduous task to get the answers right, on time.

What makes it worse is that most security questionnaires have ~60-90% overlap, often built on top of standard templates like CAIQ/VSAQ, meaning that the volume is just a result of duplication across various customers/partners and is avoidable.

That’s not all. Inconsistent, error-prone responses and the lack of a centralized way to track and distribute questions internally are further problems an organization may face. 

But before we jump into the resolution – let’s discover a bit more in detail about the challenges security questionnaires present for organizations. 

Security questionnaires are a painful endeavor

If you’ve ever been bogged down by security questionnaires and their time-consuming, manual completion process, you’re not alone. Ask any person filling out a security questionnaire, and they will likely say that there is something better they’d rather do with their time. 

The current approach to handling security questionnaires involves a lot of manual effort and pulls in multiple team members, taking attention away from strategic priorities.

This approach comes with several challenges:

1. Time-consuming procedures

Completing security questionnaires requires a significant amount of time and effort from your teams, diverting focus from core business activities. 

An average company selling to or partnering with enterprises will have to respond to 30-50 questionnaires annually, and among these questionnaires there is around 60-90% overlap. 

Sharing questionnaires with different stakeholders and monitoring the progress of each one can be a cumbersome task.

This is further delayed by a number of factors, such as: 

  1. People need to search individual documents housed in separate drives, PDFs, spreadsheets, and so on. 
  2. The process often involves manually downloading or copy-pasting answers, which is where errors creep in or context is lost.
  3. Version logs may be skipped, resulting in responses based on dated information. 
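Given the 60-90% overlap noted earlier, one simple mitigation is to fuzzy-match incoming questions against a library of previously answered ones before routing anything to stakeholders. The sketch below is purely illustrative (the question library, threshold, and function name are assumptions for the example, not any specific product’s behavior):

```python
import difflib

def find_likely_duplicates(new_questions, answered, threshold=0.8):
    """Match incoming questionnaire items against previously answered
    ones so overlapping questions aren't re-researched from scratch."""
    matches = {}
    for q in new_questions:
        # get_close_matches returns the best near-duplicates above `cutoff`
        best = difflib.get_close_matches(q, list(answered), n=1, cutoff=threshold)
        if best:
            matches[q] = answered[best[0]]  # reuse the earlier answer
    return matches

# Hypothetical answer library and incoming questionnaire:
answered = {
    "Do you encrypt customer data at rest?": "Yes, AES-256 via our cloud KMS.",
    "Is multi-factor authentication enforced for all employees?": "Yes, org-wide.",
}
incoming = [
    "Do you encrypt customer data at-rest?",   # near-duplicate wording
    "Describe your incident response plan.",   # genuinely new question
]
reuse = find_likely_duplicates(incoming, answered)
```

Only the near-duplicate gets a pre-filled answer; the genuinely new question still goes to the right owner for a fresh response.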

2. Outdated information causing risks

Inaccurate or outdated responses may arise from simple human errors, misinterpretations, or differing knowledge levels within the organization. Such incorrect answers can expose your organization to several risks, damaging your business reputation and relationships with customers and prospects.

Introducing Kai

Scrut didn’t want to simply sit by and let these challenges become a roadblock for our customers – which is why we decided to launch Kai. 

Kai, your AI copilot, is the trustworthy control partner every company needs. It is designed to provide precise guidance at the right time, ensuring that your control journey is always on the path to success.

Even though the applications of Kai are many, we decided to tackle a pressing issue as our inaugural use case: automating security questionnaire responses. 

Kai for simplifying security questionnaires

Kai harnesses the power of large language models (LLMs), restricted to your control environment, to automatically answer complex questionnaires and streamline the entire process, saving you valuable time and resources.

The Mechanism Behind 

Here’s a simplified step-by-step overview of how Kai works:

A. Language Processing

When a new security questionnaire is received, Kai uses a restricted LLM to process and understand its language, including nuanced technical and security terminology.

B. Source Inputs

Kai generates contextual responses based on your controls, policies, recorded version logs, and historical responses to questionnaires.

C. Automated Responses

Kai automatically generates responses for the security questionnaire, offering detailed and accurate information – in one click.

D. Review and Editing

Before submitting the responses, your team has the option to review and edit the answers, ensuring that they align perfectly with your organization’s unique context.
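Kai’s internals aren’t public, but the pipeline described in steps A-D resembles a common retrieval-then-draft pattern: gather the most relevant policy snippets for each question, ask an LLM to draft an answer grounded in them, and hold every draft for human review. A minimal sketch under those assumptions (the function names, keyword-overlap ranking, and `ask_llm` callback are all illustrative):

```python
from dataclasses import dataclass

@dataclass
class DraftResponse:
    question: str
    sources: list           # policy/control snippets the draft is grounded in
    answer: str
    approved: bool = False  # step D: human review before submission

def retrieve_sources(question, knowledge_base, top_k=2):
    """Step B, greatly simplified: rank snippets by crude keyword overlap.
    A real system would use embeddings or a search index instead."""
    words = set(question.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda s: len(words & set(s.lower().split())),
                    reverse=True)
    return scored[:top_k]

def draft_answers(questions, knowledge_base, ask_llm):
    """Steps A-C: for each question, retrieve sources and draft a grounded
    answer. `ask_llm` is any callable that takes a prompt and returns text."""
    drafts = []
    for q in questions:
        sources = retrieve_sources(q, knowledge_base)
        prompt = f"Answer using ONLY these sources:\n{sources}\n\nQ: {q}"
        drafts.append(DraftResponse(q, sources, ask_llm(prompt)))
    return drafts  # every draft awaits review/editing before submission
```

Keeping `approved` false by default mirrors step D: automation drafts, but a human signs off.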

Why Kai?

Kai has been meticulously developed to address the pain points associated with filling out security questionnaires. 

1. Unburden yourself

By getting rid of manual questionnaire tasks, Kai frees up your team’s time and resources – to focus on bigger, better initiatives.

2. Automation, but with control

With Kai, you’re the boss of your responses. You can double-check them before hitting send, ensuring they’re solid and reliable. You maintain complete control of the responses while enjoying the sweet benefits of AI-enabled automation. 

3. Build trust like nobody’s business

Continuous Control Monitoring with Scrut builds a strong security posture. Kai helps you demonstrate that. With Kai, you can rest assured that nothing’s overlooked and every response is up-to-date. 

4. Win deals faster

Faster security questionnaire responses -> better buying experience -> shorter deal cycles and higher win rates. It is that simple. 

Want to experience the power of Kai? 

Manually filling out security questionnaires and diverting valuable business hours is a persistent challenge across organizations, irrespective of size, industry, and geography. 

This is where Kai comes in. It is not just an enhancement, but a transformative solution for your organization’s security questionnaire challenges. By adopting Kai, you empower your team with unparalleled efficiency, accuracy, and confidence to focus on innovation and growth.

Ready to experience the power of Kai? Visit our website and schedule a demo today! Let us be your partner in securing your organization’s future.

Authored by

Aayush Ghosh Choudhary
Co-founder & CEO at Scrut

Introducing Kai: Your Ultimate Control Copilot

In the rapidly evolving landscape of business operations and regulations, the importance of effective control management cannot be overstated. Businesses face a multitude of challenges, from compliance intricacies to the ever-present risks of security breaches. 

Navigating this complex terrain demands not just expertise, but a trustworthy companion that can guide you through with precision and ease. 

That’s where “Kai,” our exciting new AI copilot, steps in.

The Birth of Kai: Bridging the Control Gap

Launching Kai wasn’t just about introducing another tech solution; it was about addressing a pressing need. As the team at Scrut delved into the challenges faced by businesses across industries, a common theme emerged: the need for a reliable partner to master control management.

And then the idea to launch Kai was born. 

The name “Kai” was chosen with great thought. Derived from the Greek word “kairos,” which signifies the opportune moment, it encapsulates the essence of our AI copilot. Kai is designed to help organizations seize the right moment to take action on their security challenges. 

Kai for Automated Security Questionnaire Responses

While the applications of Kai are diverse and boundless, we decided to tackle a pressing issue as our inaugural use case: automated security questionnaire responses. 

Security questionnaires are a standard part of vendor and partner evaluations, often demanding painstaking hours to address. But with Kai, those hours are saved, and accuracy is heightened.

Kai swiftly analyzes each questionnaire, scans across your controls, and extracts relevant information for the questionnaire. It then crafts well-informed questionnaire responses that align with your business’s security posture. 

This not only saves time but also ensures consistency and accuracy, reducing the chances of errors that can lead to more back and forth, and longer deal cycles. Kai breaks through elongated sales cycles and helps you close deals faster. 

Why Kai Shines: Beyond Automation

Kai isn’t just an automation tool; it’s a digital companion, a copilot that understands the intricacies of control management. It’s designed to bridge the gap between human expertise and technological innovation, providing insights that empower users to make informed decisions. 

The real magic of Kai lies in its adaptability and intelligence – it grows with your needs, accommodating diverse control scenarios and expanding with your journey.

With Kai as your copilot, you have a trusted partner that’s equipped to navigate control complexities, mitigate risks, and drive compliance. 

Experience the power of Kai – visit our website to learn more about how it’s reshaping the control landscape. Take control of your controls, and let Kai be your guiding light.

Authored by

Aayush Ghosh Choudhary
Co-founder & CEO at Scrut

Keeping up with cybersecurity: Must-know statistics and trends for 2023

Industry leader Gartner released Top Cybersecurity Trends for 2023 in April. It highlights the growing significance of the human element in mitigating risks and maintaining a strong cybersecurity posture for an organization. 

In the words of Richard Addiscott, Sr. Director Analyst at Gartner, “A human-centered approach to cybersecurity is essential to reduce security failures. Focusing on people in control design and implementation, as well as through business communications and cybersecurity talent management, will help to improve business-risk decisions and cybersecurity staff retention.”

In this article, we will learn about the nine cybersecurity trends predicted by Gartner that will impact security and risk management (SRM) leaders across the globe. We will also look at some of the statistics supporting these cybersecurity trends. So, buckle up. 

9 cybersecurity trends for 2023

SRM leaders must focus on the following three domains to address cybersecurity risks effectively and sustain the cybersecurity program of their organization.

  • The essential role of people for the security program’s success and sustainability
  • Technical security capabilities that provide greater visibility and responsiveness across the organization’s digital ecosystem
  • Restructuring the way the security function operates to enable agility without compromising security

The nine cybersecurity trends for 2023 that will impact SRM leaders are based on the above three domains.

Trend 1: Human-centric security design

The Hacker-Powered Security Report says that 92% of ethical hackers were able to find vulnerabilities that scanners couldn’t. 

While security automation has made significant progress, it has not yet reached a point where it can fully replace human creativity. The statistics mentioned above emphasize the ongoing need for human-centric security design in 2023 to bolster cybersecurity posture effectively.

The same report also mentions that in 2022, the hacking community found over 65,000 customer vulnerabilities. However, 50% of the hackers chose not to disclose the vulnerability they found. 

The report claims that having a vulnerability disclosure program and an impressive bounty can make your website attractive to hackers, who can then disclose the vulnerabilities they discover. 

Additionally, preparing your in-house security personnel and training them for the worst can also enhance their performance and sustain your cybersecurity program. 

CISOs should review the past mistakes made by their organization that led to cybersecurity incidents and develop future plans to reduce risks. 

They should pivot controls to more human-centric approaches that reduce the burden on employees and ensure greater security.

Trend 2: Enhancing people management for security program sustainability

Gartner predicts that by 2026, 60% of organizations will shift from external hiring to quiet hiring, i.e., hiring from internal talent pools to address cybersecurity and recruitment challenges.

Organizations have tended to prioritize adopting newer technologies over investing in comprehensive employee training. However, for optimal results, a perfect balance should be struck between introducing advanced technologies and providing continuous employee training. CISOs who have focused on both areas have seen improvements in their functional and technical maturity. 

Did you know that according to Verizon, 82% of breaches involved a human element in 2022? Whether it is the use of stolen credentials, phishing, misuse, or simply an error, people continue to play a very large role in incidents and breaches alike.

In 2023, SRM leaders will have no option but to train and retain their employees. Cybersecurity training is an inevitable part of business management in the coming years. 

Trend 3: Transforming the cybersecurity operating model to support value creation

Cybersecurity is not just an IT function but should be treated as a business enabler. It should not be siloed but should be woven into the fabric of the organization. 

Every action employees perform should be designed with the organization’s cybersecurity in mind. Following are the ways in which an organization can weave cybersecurity into regular business operations:

  • Develop a security-conscious culture throughout the organization by promoting awareness, education, and training programs. IBM has observed that with the right employee training, the cost of a data breach can be reduced by $247,758. 

PwC noted that in 2022, 46% of companies increased CEO engagement in cybersecurity matters, and 43% increased employee reporting rates on phishing tests, as part of instilling a cybersecurity culture in the organization. 

  • Integrate security considerations early in the development lifecycle of products, services, and processes. Implement a “security by design” approach, where security features are built into the design and architecture rather than being added as an afterthought. 

PwC also found that 43% of organizations increased the number of cyber and privacy assessments before project implementation in 2022. This trend will continue in 2023. 

  • Identify and understand the business objectives and priorities of the organization. Determine how cybersecurity can contribute to achieving those objectives, such as protecting customer data, preserving brand reputation, or ensuring regulatory compliance. 

According to PwC, 42% of organizations increased the alignment of their cyber strategy with business strategy in 2022. 

  • Develop metrics and key performance indicators (KPIs) that align with business objectives and demonstrate the value of cybersecurity initiatives. Regularly report on the effectiveness and impact of cybersecurity efforts to senior management and stakeholders. 

Cybersecurity leaders should use less technical jargon when communicating with management to help them understand the issues better. The World Economic Forum reported that 17% of security executives are concerned about the level of cyber resilience in their businesses.

Trend 4: Threat exposure management

Threat exposure management relates to attack surface management. Attack surface refers to all the points from which a cybercriminal can enter the network of an organization. 

The Hacker-Powered Security Report describes the attack resistance gap as the gap between what organizations are able to protect and what they need to protect.

The main factors contributing to this gap are incomplete knowledge of digital assets, insufficient testing, and a shortage of the right skills.

CISOs need to adapt their assessment approaches to gain insights into their vulnerability to threats through the implementation of Continuous Threat Exposure Management (CTEM) initiatives. 

CTEM initiatives refer to the proactive and ongoing efforts taken by organizations to continuously assess, understand, and manage their exposure to threats. 

CTEM programs focus on real-time monitoring, analysis, and response to evolving threats and vulnerabilities.

“CISOs must continually refine their threat assessment practices to keep up with their organization’s evolving work practices, using a CTEM approach to evaluate more than just technology vulnerabilities,” said Addiscott.

Trend 5: Identity fabric immunity

Vulnerabilities in an organization’s network are often caused by incomplete or misconfigured elements in its identity fabric. 

IBM reported that organizations with mature Identity and Access Management (IAM) saved an average of $224,396 in data breach costs in 2022.

IAM is a framework or set of processes, policies, and technologies designed to manage and control user identities, their authentication, and their access to resources within an organization’s IT environment. 

It focuses on ensuring appropriate access to systems, applications, data, and other digital assets while mitigating the risk of unauthorized access or data breaches.

Key components of IAM typically include user provisioning, authentication mechanisms (such as passwords, multi-factor authentication, or biometrics), access control policies, identity lifecycle management, role-based access control, and centralized identity repositories.

IAM solutions help organizations enforce security policies, streamline user management, and ensure compliance with regulations.
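The role-based access control component mentioned above can be sketched in a few lines. The roles, permissions, and user assignments here are hypothetical, minimal stand-ins for what a real IAM system manages:

```python
# Sketch: a minimal role-based access control (RBAC) check, as used in
# IAM systems. Roles, permissions, and users are illustrative.

ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete", "manage_users"},
    "analyst": {"read", "write"},
    "viewer": {"read"},
}

USER_ROLES = {"alice": "admin", "bob": "viewer"}

def is_allowed(user: str, action: str) -> bool:
    """Allow the action only if the user's role grants it."""
    role = USER_ROLES.get(user)
    # Unknown users or roles get an empty permission set: deny by default.
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("alice", "delete"))  # True
print(is_allowed("bob", "write"))     # False
```

A production IAM system layers authentication (passwords, MFA, biometrics), identity lifecycle management, and audit logging on top of this core authorization check.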

Trend 6: Cybersecurity validation

Cybersecurity validation brings together the techniques, processes, and tools used to validate how potential attackers exploit an identified threat exposure. 

The tools utilized for cybersecurity validation are advancing considerably in automating repetitive and foreseeable elements of assessments. This advancement facilitates frequent evaluations of attack techniques, security controls, and processes, allowing for consistent benchmarking.

In a Deloitte survey, 76% of respondents reported using automated behavioral analytics tools to detect and mitigate potential cyber risk indicators among employees, up from 53% in 2021. 

This indicates that organizations are increasingly adopting artificial intelligence (AI) and machine learning (ML) tools for both routine and analytical tasks, a trend that is likely to continue. 

Trend 7: Cybersecurity platform consolidation

Vendors of cybersecurity, compliance, and related services are consolidating more offerings under their umbrellas. Organizations should therefore check whether services overlap and whether they are paying multiple times for the same capability. 

For example, governance may be offered by the same vendor offering compliance services and cybersecurity services. It is crucial for SRM leaders to reduce redundancy across the organization to save precious resources. 

Moreover, as organizations come to deal with fewer vendors, they will also have fewer vendors to vet. 

There is a clear difference in behavior between trust leaders and other organizations: while 75% of trust leaders vet third-party personnel and/or vendors before using their AI platforms or services, only 34% of other organizations do so, leaving the latter more vulnerable to cyberattacks (McKinsey). 

Vendor assessment is a crucial aspect of cybersecurity and compliance. Without vendor risk assessment, you might fall prey to a cyberattack. 

Trend 8: Composable businesses need composable security

To keep up with the rapidly evolving business landscape, organizations need to shift away from dependence on monolithic systems and instead focus on developing modular capabilities in their applications. 

Composable security is an approach that involves integrating cybersecurity controls into architectural patterns and applying them at a modular level within composable technology implementations.

Gartner predicts that by 2027, more than 50% of core business applications will be built using composable architecture, requiring a new approach to securing those applications.  

“Composable security is designed to protect composable business,” said Addiscott. “The creation of applications with composable components introduces undiscovered dependencies. For CISOs, this is a significant opportunity to embed privacy and security by design by creating component-based, reusable security control objects.”
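Addiscott's "component-based, reusable security control objects" can be loosely sketched as small, self-contained controls that compose into a pipeline in front of any modular component. The interface and controls below are assumptions for illustration, not a Gartner-defined pattern:

```python
# Sketch: composable security controls as small, reusable objects that
# can be chained in front of any modular application component.
# The interface and the two example controls are illustrative.

from typing import Callable

# A control takes a request, returns it (possibly modified), or raises.
Control = Callable[[dict], dict]

def require_auth(request: dict) -> dict:
    if not request.get("user"):
        raise PermissionError("unauthenticated")
    return request

def redact_pii(request: dict) -> dict:
    # Privacy by design: strip fields the downstream component
    # should never see.
    request.pop("ssn", None)
    return request

def compose(*controls: Control) -> Control:
    def pipeline(request: dict) -> dict:
        for control in controls:
            request = control(request)
        return request
    return pipeline

# The same control objects can be reused across different components.
secure_entrypoint = compose(require_auth, redact_pii)
print(secure_entrypoint({"user": "alice", "ssn": "123-45-6789", "q": "x"}))
```

Because each control is independent, it can be tested, versioned, and reused wherever a new composable component is assembled, which is the "security by design" opportunity the quote describes.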

Trend 9: Boards expand their competency in cybersecurity oversight

PwC found that organizations with the best cybersecurity outcomes over the past two years are 14 times more likely to receive significant CEO support across all categories of cyber issues.

Also, PwC's data showed that in 2022, 42% of organizations increased their assessment of board members' understanding of cyber matters, and 43% increased the time allotted to cybersecurity discussions at board meetings.

The above figures show two things: (1) CEO support can improve cybersecurity outcomes, and (2) organizations are moving towards greater reliance on that support. 

Executives across most regions and industries said that the most important step towards a more secure digital environment by 2030 is educating CEOs and board members so they can fulfill their duties and responsibilities. 

Moreover, the board’s growing emphasis on cybersecurity arises from the shift towards clear accountability for cybersecurity, which includes augmented responsibilities for board members in their governance duties. 

Cybersecurity leaders are required to furnish boards with reports showcasing the influence of cybersecurity programs on the organization’s goals and objectives.

Final thoughts

The release of Gartner’s Top Cybersecurity Trends for 2023 highlights the increasing importance of the human element in cybersecurity and the need for a human-centered approach to mitigate risks and maintain a strong cybersecurity posture. 

As stated by Richard Addiscott from Gartner, focusing on people in control design, implementation, communication, and talent management can improve business-risk decisions and cybersecurity staff retention.

The article explores the nine cybersecurity trends predicted by Gartner, which will impact security and risk management leaders worldwide. These trends revolve around three key domains: the role of people in security program success, technical security capabilities for greater visibility and responsiveness, and restructuring the security function for agility without compromising security.

Each trend is supported by relevant statistics and insights. From the importance of human creativity in security design to the need for comprehensive training and retention of employees, the trends highlight the evolving landscape of cybersecurity and the strategies organizations must adopt to stay resilient.

Overall, Gartner’s cybersecurity trends for 2023 provide valuable insights for security and risk management leaders, emphasizing the significance of the human factor, proactive measures, and adaptive approaches to address emerging threats and protect organizations in an increasingly digital world.