Part 2: Why does Culture Hacking Matter?

Photo Credit: Photo by Tim Mossholder on Unsplash

Failure in this system is like a cliff in the dark: a precipice we can't see until it's too late and we're about to tumble over it. We're afraid of it, waiting for us out there in the darkness, and all we know is that we never want to get too close in our wanderings.

In the last post, I explained the origin of culture hacking (systems thinking) with a bunch of software analogies. In this short post, I will try to make the case that culture hacking isn't just a gimmick used by start-ups to boost share prices with ping-pong tables.

Meme of a boy with a trumpet following a girl who has blocked her ears; the boy is labelled "Me talking to people about security culture", the girl is labelled "Everyone else".

I'll be honest: I've re-written this post more than once. I'm up to the fourth version now, and it looks nothing like the post I started with, but in the process I've distilled the two key reasons I think culture hacking matters:

  • If you’re not hacking the company culture, there is a high likelihood someone else is, and chances are they don’t have your interests at heart.
  • By continuously hacking and improving the company's culture, you can increase its resilience to external influence.

Now, when I say culture hacking, I mean security culture hacking: what we can do to improve the perception and uptake of security initiatives. That said, I think a lot of this is relevant to other parts of the business.

If you talk to me about anything security-related, I will bring it back to culture. When I did pen testing, I always tried to speak with empathy, though I'll be the first to admit that some days, after seeing the (n+1)th web app with the same problem, I may have had a grumble. But the reality is that being a security person is hard.

I still remember, back when I was fresh, wanting to see some movement on a ticket we'd had open for a while to implement a Content Security Policy. Easy enough, right?

A table outlining ease of implementation vs security benefit; the row for Content Security Policy is labelled as hard to implement with a high security benefit.

At the time we were using one of those flashy frameworks that let you build a web application and package it as an app, so even better, I didn't have to come up with more than one variation of the policy.

Anyway, I started my research, cataloguing the assets in use so I could add them to the allow list, except… a few required wildcards. Enter rabbit warren one: it turned out that no, we wouldn't be changing those. Okay, fine… off I go, and then I noticed that we had inline scripts and functions that needed unsafe-eval… and the story goes on.

This “simple” ticket so scarred me I almost did a talk on it.
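For anyone who hasn't wrangled one, a Content Security Policy is just an HTTP response header assembled from directives and their allowed sources. A minimal sketch in Python (the directive names are real CSP directives, but the hosts and sources are invented for illustration, not the app from the story):

```python
# Build a Content-Security-Policy header value from a directive map.
# The sources below are made up for illustration.
directives = {
    "default-src": ["'self'"],
    "script-src": ["'self'", "https://cdn.example.com"],  # catalogued third-party assets
    "style-src": ["'self'", "'unsafe-inline'"],           # inline styles we couldn't remove
}

def build_csp(directives):
    """Join each directive and its allowed sources into a header value."""
    return "; ".join(f"{name} {' '.join(sources)}" for name, sources in directives.items())

print("Content-Security-Policy:", build_csp(directives))
```

The pain in the story comes from those source lists: every wildcard or unsafe-* value you're forced to add quietly weakens the whole policy.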

What's your point, Bec? Well, my point is that even on a good day, when you don't have other factors pressing in on you, security is challenging. But if we take the example of the developer, we can identify various other issues that may result in an inadequate security outcome:

A person using a computer with lots of different things pressing in on them, like deadlines, security, and feature requests.

With all of these external pressures, which do you prioritise? Well, that depends on what the organisation demonstrates its values to be. If a developer has seen or heard of other people being reprimanded for prioritising x over y, chances are they won't make that decision, for fear of the same.

On the flip side, co-workers and direct reports will see our developer's decisions and emulate them.

If you’re not hacking the company culture, who is?

Three sprints have passed, and each time security has been put on the back burner behind feature requests and project deadlines. Each time, there is a risk of introducing more security bugs. Each sprint, we move closer to the mandatory penetration test. As many security researchers and penetration testers know, when you issue that 30-finding report, that's when it gets bad.

But of course, there is no right or wrong answer about what the priorities should be. Instead, what we should be doing is looking for small areas of exploitation, areas we can hack, that we can use to ensure the person in our example feels they can make the right decisions at the right time.

Depending on the business and the outcomes it seeks to achieve, this might be:

  • Having the DevSec, Dev or Sec teams work on some security-focused pipeline improvements, so security is always part of the cycle.
  • Management can consider adding additional time to the project to meet project/security objectives and ensure developers aren’t working excessive overtime.
  • One-on-ones can be used to ensure developers get adequate feedback to inform what they should expect from a performance (360) review, or to manage their expectations of themselves.
  • Regular training and education in technical and wellbeing skills, so developers can balance their work and personal lives.

Each of these recommendations can be further broken down into smaller "culture hacks" that release some of the pressure a developer faces when confronted with conflicting priorities.

By continuing to make these small changes that improve people's opinions of security culture and general wellbeing at your organisation, you're also helping to make your culture more resilient to external influence. People can act as cultural antibodies who feel safe calling out behaviour that doesn't align with company culture and values.

Of course, keep in mind that in the same way positive change can make a business more resilient to negative change, the inverse is also true.


  • Ask yourself, if you’re not hacking the company culture, who is?
  • Security isn't just a technical challenge; a significant portion of it is a human/societal challenge.
  • Culture can be a significant indicator of how well a security program will work in an organisation.
  • By continuing to improve company culture, you can make your company resilient to change.

Part 1: Culture Hacking

Photo Credit: Photo by Code💻 Ninja⚡ on Unsplash

Culture hacking has its roots in the concept of systems thinking, developed by the System Dynamics Group at MIT from 1956. The general idea is that a system will act differently when isolated from its environment or from other parts of the system.

In this analogy, the system is an organisation, and the organisation's culture is like an operating system.

But when we say organisational culture, we aren't talking about the values posted on the "About Us" page of the website, or what managers tell us during induction sessions. Instead, when we talk about organisational culture, we are talking about how people act and the values they embody day to day.

And while culture hacking may seem like the latest way businesses have attempted to improve company culture, the concept has been around a lot longer than Google Trends, even if not under the name of culture hacking.

So, what exactly is Culture Hacking?

Simply put, culture hacking is a form of social engineering. To stretch the analogy further, culture can also be seen as the software of the mind.

So we have a lot of software at this point. The way I like to frame it is that people are like modules that can be dynamically linked into the kernel (the organisation).

But this is also not a new idea; the "software of the mind" framing was proposed by Geert Hofstede, a Dutch social psychologist and IBM employee.

So if we stick with "culture is software", and the idea that a system will act differently when isolated from its environment or other parts of the system, we can see how organisations can influence and change how we think, and how we can change organisations from the inside out.

But of course, it's a lot easier said than done; some organisational cultures are more challenging to change than others.

Photo of a dumpster fire, the dumpster is labelled organisational culture.

Culture hacking aims to exploit a single area of an organisation where the culture is already susceptible to change. Much like regular hacking, you don't try to breach the strongest part of the organisation, because you're likely to come up against more resistance.

And just like regular hacking and social engineering, it can be used by people who want to see positive change, and it can be used by people who want to see the world burn.

But hacking culture isn't the same as hacking computers. You can't just cycle through the options until you get the flag… although as I went through the quality assurance phase of this blog post, I realised you 100% could do that, and that is literally what voice phishing is.

Fallout game, with dialogue selections

Regardless, you can't just restore your last save if you decide the outcome of a particular choice wasn't quite what you were after. I DuckDuckGo'ed this to make sure.

You might also be thinking that the idea of hacking a person's belief system is a little gross and intrusive. You're right: if the wrong person drives a culture hacking initiative, the outcome can be a more toxic workplace.

So as part of a good culture hacking program, you should borrow the following considerations from systems thinking when making decisions about how you want to change the culture of your business:

  • Consider long and short-term consequences of actions;
  • Recognise there might be unintended consequences to our actions;
  • Identify the circular nature of complex cause and effect relationships; and
  • Look at things from different angles and perspectives.

Photo of a wall with two "Black Lives Matter" posters Photo Credit: Photo by Tilda Foletta on Unsplash

Culture hackers traditionally came from the worlds of activism, fashion and art, shaping the way we see the world. Organisations have co-opted this idea to try to make themselves seem less business-like: by renaming HR to "Human Capital", "People Operations" or "People and Change", they are trying to sell you their cool organisational culture. It might involve beer on tap and ping-pong tables, at least back when we had offices.

You might be feeling a little cynical about culture and what organisations have to say about it, and my explanation of culture hacking may not have put those fears to rest. Still, over the next few posts, I want to focus on why we should care about culture hacking, how we can do a risk assessment of our culture, and ultimately how to influence it to be more security-positive.


  • Culture hacking is a form of social engineering and can be leveraged by anyone.
  • If the wrong person drives a culture hacking initiative, the outcome can result in a more toxic workplace.
  • Before changing or exploiting these cultural weaknesses, we should consider the long and short-term consequences and look at things from different perspectives.

Note: In the true essence of continuous improvement, this blog post has been re-written to improve readability.

A Review of Practical Cloud Security, or the First Book I Finished this Year

Photo Credit: Photo by Szabo Viktor on Unsplash

  • Practical Cloud Security: A Guide for Secure Design and Deployment
  • Author: Chris Dotson
  • Pages: 196
  • ISBN10: 1492037516
  • ISBN13: 9781492037514

I haven't worked out whether "Practical Cloud Security" is just so engaging that I managed to finish it in a couple of days, or whether it is on the shorter end of the scale for technical books. I'm not complaining, though, because I really enjoyed this book.*

Each chapter provides a breakdown of a key area including: Cloud Asset Management and Protection, Identity and Access Management, Vulnerability Management, Network Security and Incident Response, to name most but not all of them.

Admittedly, one of my favourite sections was on tagging cloud resources: when doing configuration reviews it's something I rarely see done, but something I think a lot of companies could take advantage of and benefit from, especially when dealing with a shared environment and asset management.
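As a sketch of why tagging helps with asset management, here's the kind of check a configuration review could automate. The required tag keys below are my own assumption for illustration, not a list from the book:

```python
# Flag cloud resources that are missing required tags.
# REQUIRED_TAGS is an assumed tagging policy, not one prescribed by the book.
REQUIRED_TAGS = {"owner", "environment", "cost-centre"}

def missing_tags(resource_tags):
    """Return the required tag keys absent from a resource's tag set."""
    return REQUIRED_TAGS - set(resource_tags)

print(missing_tags({"owner": "platform-team", "environment": "prod"}))  # cost-centre is missing
```

In a shared environment, a check like this run over an asset inventory quickly shows which resources nobody can attribute to a team or budget.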

It also had some great out-of-the-box metrics in the vulnerability management chapter. Are they good for mature businesses that already have a good handle on their cloud environment? Debatable. But if you are getting started with metrics, or looking for a way to monitor how successful your patch and vulnerability management program is, they could provide a good starting point.
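I won't reproduce the book's metrics here, but to give a flavour of the starting-point kind, mean days to remediate a finding is one you can compute from almost any ticket export (the dates below are made up):

```python
from datetime import date

# (opened, closed) pairs for remediated findings - sample data only
findings = [
    (date(2021, 1, 4), date(2021, 1, 18)),
    (date(2021, 1, 10), date(2021, 2, 1)),
]

def mean_days_to_remediate(findings):
    """Average number of days between a finding being opened and closed."""
    return sum((closed - opened).days for opened, closed in findings) / len(findings)

print(mean_days_to_remediate(findings))  # 18.0 for the sample data
```

Tracked sprint over sprint, even a simple number like this shows whether a patch and vulnerability management program is trending the right way.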

One of the other qualities that sets this book apart from a lot of others is that it references back to how certain concepts are handled with on-premise infrastructure. I was probably a little late to the cloud game, and with the more abstract concepts like Kubernetes Pods (not covered directly in this book), I do much better when I can relate them back to a concept I know well, i.e. on-premise, so I really appreciated that.

I think this, and its focus on the practical part of cloud security, makes it a really good option for those who are familiar with cloud (or even new to it) but need a too-long-didn't-read version. It doesn't focus on the finer details of implementation, so you'd probably want to follow up with a more in-depth cloud security book. You could also consider reading the Center for Internet Security $CLOUD benchmark, but that will probably overwhelm you with details you just don't need in your life right now. Let's keep it practical.

The book also remains very vendor-agnostic, but does provide the name of the relevant service for the three major cloud providers (and IBM Cloud).

Admittedly, if I had one complaint, it would be that I didn't like the risk section. Honestly, that could just come down to semantics and how I define risk as opposed to the author. With that said, it's a good chapter and will benefit those looking to understand the general topic.

Overall, I think this is an informative book for almost anyone involved in cloud security who is looking for a primer, overview, or refresher. Because it is on the shorter end, you can approach it in a read-end-to-end way. I sort of skimmed the areas I felt I had a good handle on, but keep a highlighter and sticky notes on hand, because there are some great concepts in there you may want to revisit.

I rate it 4 out of 5 clouds.

Note: After putting together the details of the book, I found out that at 196 pages it is probably on the shorter end of technical books.

Satisfying Clause 4: Context of the Organisation

Photo Credit: Photo by Atul Vinayak on Unsplash

This is the second time I am writing this, because bad habits die hard and Word didn't save my first draft. Then again, I never enabled that feature, so who's really at fault here? Obviously, the computer.

So be it.

Anyway, as you'll recall, in my previous blog post I listed a few key things I was going to focus on. I also started using an ISO27001 implementation project plan, and I'm already behind. Luckily, the CEO and head of the implementation committee is willing to turn a blind eye in exchange for more cat snacks.

This week's focus was on the following areas:

  • Determine the needs and expectations of interested parties (4.2).
  • Review the purpose, vision, and mission with reference to interested parties (4.1).
  • Conduct a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis (4.1).
  • Sketch out the ISMS and document as I go along (4.4).
  • Determine the scope of the ISMS (4.3).

I approached them in this order because I felt it made more sense to identify the needs and expectations of interested parties first, rather than defining the purpose, vision and mission of the ISMS and then validating it with the interested parties.

I also see it as the approach that encourages cross-team discussion and results in more open communication, because instead of asking people to validate what you see as their needs and expectations, you get the answer straight from the people who will be working within the confines of the ISMS.

This is an important consideration to me because security and information technology teams are often very technology- and control-focused. Not always, but often, they are looking for ways to prevent their users doing a "Bad Thing™", or they have a strong focus on technology/cyber-based risk rather than widening that lens to include other internal and external factors.

If you disagree, that’s okay, it may be that you have a tech/security team that is well educated and informed about business needs. That could come from culture, past experience, there are a lot of reasons, unfortunately that’s not the case for everyone.

(Ctrl + S)

Another key point: you (the ISMS implementer) cannot be expected to take every single consideration into account when speaking to the interested parties, and that should be made clear, i.e. "We are taking what you say on board, but we are also trying to balance other obligations, so not everything will actually be implemented."

The good thing is that a proper ISMS should undergo continuous improvement, which means that while some of these pressure points may not be resolved in the first year, they can be reviewed yearly. Frankly, where possible, I think the interested party should be updated on any changes, because by that time their opinion may have changed, and the ISMS may have solved the problem! I'm sure that internally got some "yeah right" and "wishful thinking" mutters, which again is fine.

You're working within the confines of a company that probably has an established culture, and changes to culture can be some of the hardest to make. In these cases, the key isn't to change the system overnight; it's to chip away at the problem. For example:

If you have an established ISMS and people continuously moan and groan about how difficult it is and call it security theatre, why not book some time with some key end-users, have a virtual coffee and talk about it? Ask some open-ended questions, and make sure they know that none of what they say will be traced back to them.

Just an idea though, make it work for your style, and company culture.

But back to the more pressing matter, clause 4.2 – determining the needs and expectations of interested parties. I have four interested parties:

  • One internal user
  • One external user
  • The cat
  • Myself

Now because this is an informal “organisation”, or “business” if you can call it that, I started with a Teams message (yes, we run Teams, yes, we have a Governance channel, yes, I run this as close to a proper implementation as I can):

As most people know, I'm being a shithead and rolling out an ISMS at home, you've been invited here today because chances are you'll be impacted by it. If you have any risks or opportunities you'd like to see addressed please let me know over the next couple of days in a thread.

I got Z E R O messages. N O N E. Sweet. So I went and annoyed my first internal user in person, the conversation went like this:

“What are your needs or concerns about implementing an ISMS?”
“What does that mean?”
“Well, what are your hopes and dreams for being more secure? What are your biggest fears about having to work within an ISMS?”

Insert 20-minute conversation about controls here

And folks, this is the conversation that reinforced to me that people don't always realise an ISMS is broader than the controls they deal with on a day-to-day basis. Some of the key takeaways from that conversation were also focused on the negative:

  • I don't want changes to take forever, or to be restricted to 12pm to 5am;
  • I want to be able to quickly spin up infrastructure/software for testing and not have to go through the wringer.

All fair points, the cat’s opinion was pretty standard:

  • You can do what you like as long as I get snacks and pats.

I'm yet to reach out to our one external user, but as the system is unlikely to cause much of an impact to them at this point, I will probably defer their interview until after the initial implementation.

The thing that struck me as interesting, but not unexpected, is the strong association between strong controls and poor experience. It suggests to me that the organisations our users previously worked for often had controls, processes and policies that felt restrictive, either because the organisation was overzealous in implementing hard (technical) controls, or because users just weren't consulted or considered as part of the original interested parties.

While I had less than an A4 page of notes from my two users, I felt I had a lot more information once I also assessed what they didn't say, which was anything astoundingly positive.

So now I can consider the purpose, vision, and mission with reference to interested parties, because I know what the expectation of these parties is!

The first time I did this, I used a brand compass to help determine the purpose, vision and mission of my "organisation", but it wasn't really helpful, so instead I went with why (purpose), what (vision) and how (mission) to hopefully flesh this out a bit better.

So, what are we asking:

  • Why: To improve our security posture, which will increase peace of mind.
  • What: By implementing what we understand to be best practice.
  • How: In a practical, measurable, repeatable way.

So our mission statement for the ISMS is basically:

To improve our security posture, which will increase peace of mind, by implementing what we understand to be best practice in a practical, measurable, repeatable way.

Some other questions I asked myself this time around:

  • Why am I going to an implement an ISMS?
    • Consider here internal and external obligations
      • Laws and Regulations – it makes meeting regulations easier.
      • Standards that might apply to your industry/sector
      • What the market expects of you (contractual obligations, this can also include what consumers (if you’re a services-based company) expect of you)
      • Internal policies (Code of Conduct, Occupational Health and Safety)
  • How will I get there?
    • Are there organisational cultural barriers?
    • Does the organisation have the capabilities to meet the requirements? Not just fiscally but also resource and skill wise.
  • What will it be like when I arrive?
    • How will we make security part of everyone’s job?
    • How will we support morale through this process?
    • What values do I want us to embody? (transparency, blameless culture, learning)
  • Who will I see when I arrive? (An auditor lol)
    • Do we need more people? What sort of roles would need to be filled?
    • Are we going to get an external audit done?

I don’t think there are any questions that are set in stone, and you might have some better variations you like to ask, these are just some I thought of while going through this process.

So now that I know why I am doing this (besides just wanting to), I did a Strengths, Weaknesses, Opportunities, and Threats analysis. This isn't mandatory, but I found it helped identify where there might be some problems in implementation:


Strengths:

  • Skills in governance, risk, and information technology implementation/configuration;
  • Small “business” – we can make decisions quickly, rollout solutions quickly and roll them back quickly;
  • Strong networking capabilities;
  • Lack of compliance requirements.


Weaknesses:

  • Lacking skills in configuring security information and event management (SIEM) solutions;
  • More infrastructure, means more maintenance;
  • Can’t do in-depth software risk assessments;
  • Small budget.


Opportunities:

  • Become more familiar with implementing/managing/configuring open source solutions;
  • Can add more canary tokens to the environment (maybe even budget for a hardware-based version);
  • Capabilities to build out a secure software development lifecycle;
  • Speed up the time, accuracy and security of deploying edge services.


Threats:

  • Unfamiliarity with solutions may result in misconfiguration and a widened attack surface;
  • Identity thieves;
  • No cyber insurance.

Next up, clause 4.4: work out what I want the ISMS to look like. In most cases people would probably take a business or process approach; I'm more interested in taking an iterative approach using SCRUM.

I decided I wanted to do it this way before I even really started, because I value the core pillars of SCRUM (transparency, inspection and adaptation), and really, they align quite well with ISO27001 if you think about it.

Transparency especially, I think, makes for good security culture. It ties into "it's okay to make mistakes as long as you own up to them", which in turn helps encourage a blameless culture, which is important for helping people grow and improve.

SCRUM also encourages cross-functional teams, which helps build personal relationships, promotes better communication and, in turn, better co-operation. By removing silos, we have a better view of what others have to work with, and hopefully develop some understanding and empathy.

I've read some literature online about how implementing SCRUM and ISO can go, and while I have seen some variations address some of the earlier clauses (particularly clause 5, leadership) this way, I don't like it. Even in SCRUM you have a clear owner, and I personally think it is their role to interface with the business to satisfy this clause. Basically, it is up to the product owner to understand the business and have a vision for the team.

But I digress; I'm here to tell you why I made my choice, not necessarily to justify how it maps back. Basically, it comes down to this: I see more value in implementing ISO using a cross-functional team, because the requirements will ultimately be more precise and better aligned to the existing business processes and culture.

As we go through the clauses I may address this in more detail.

Finally, scope. Clause 4.3: determine the scope of the ISMS! I started by listing out the areas that pose the most risk to us, and the areas that we have little to no control over.

We've excluded a lot of the human resources parts; however, at least informally, I might decide to take a look at awareness training and how we provision access across all our different solutions, primarily to make accessing different services easier.

Further to this, physical and environmental security is out of scope. I'm not pricing up and having HVAC installed to maintain a handful of on-prem servers. They have a UPS; anything beyond that is in the hands of the gods.

Software development is out of scope, at least for now. I want to add it in later, but right now it's not a key part of the systems at home, and those systems are managed at least informally.

I ended up with this visual representation of the scope which focuses on “departments” rather than processes because right now we don’t really have many of those.

A visual ISMS scope that defines the different areas of the business in scope.

Which I translated into the following:

The scope of the ISMS includes the deployment, configuration and management of infrastructure and network-based software solutions at the head office, excluding physical, software development and research and development aspects. Risk and information security remain fully in scope, as documented in the statement of applicability version 1.0.

So right now, I'm pretty satisfied that I've closed off clause 4. The next phase for me will be working on risk and, more generally, planning (clause 6). From there I'll probably jump back to clause 5, Leadership. My rationale: if I can demonstrate even some of the fundamental risks posed, I'll get better buy-in (even though I have bought buy-in already, it was on special at Aldi last Friday), and leadership will be committed to and accountable for the success of the project.

Focus for the Coming Week

  • Define and document roles and responsibilities (5.3)
  • High-level risk assessment (6.1)
  • Determine and document the ISMS objectives (5.1)
  • Present the case for ISMS to Leadership (5.1)
  • Develop an Information Security Policy (5.2)

What I Am Reading

  • PRAGMATIC Security Metrics by W. Krag Brotby, Gary Hinson (ISBN: 9781439881538)
  • People-Centric Security: Transforming Your Enterprise Security Culture (ISBN: 9780071846790)

Hello World! 🌏

Photo Credit: Photo by Luis Villasmil on Unsplash

So, recently… no, scratch that, for a while I've been considering implementing an information security management system (ISMS) at home. People have asked why, and the answer comes down to one of two reasons:

  • It sounds like fun; and,
  • We run the infrastructure of a small-to-medium business, and it makes sense to have something to make sure we are following best practice.

Of course, I have the luxury of looking at things through a pure information technology lens. After all, I have no personally identifiable information (PII) to protect, products to manage, or governors to report to. My budget is whatever I'm willing to spend, my biggest problem is procrastination, my steering committee is my cat Lyla, and the main question the change advisory board has is "when do I want to stream things?".

Do I have things easy? Absolutely, but if you know me, it means this project is going to be nothing but.

Considering my budget is whatever I'm willing to spend, which is very little (gotta support that plant habit somehow), I decided to deploy the community edition of Eramba, an open source governance, risk and compliance solution. This version gets (hopefully) one update per year.

They also have an enterprise edition available for 2,500 Euro per year (about $4,124.10 at the time of writing), which gets an update roughly every month.

You can find out more on their site, but I've had this application deployed for less than 24 hours, so we'll see how well it goes. Deployment itself was easy, as I handed it over to our sysadmin, who sorted out all my ESXi compatibility issues and got it deployed.

So what are my next steps? I have the software, I have the standards, and so it's time to begin working through the mandatory clauses 4 to 10!

Over the next week I'll be looking specifically at clause 4, context of the organisation. This will involve me:

  • 4.1 - Understanding internal and external factors that might affect the ISMS and the outcomes I hope to achieve - this will help me work out the purpose of the ISMS, work out how to manage it, and allocate resources.

More importantly, this point and the other parts of this clause will help me define a scope and, therefore, which controls from Annex A are needed… and I feel like this is where people get tripped up when rolling out ISO27001. At least, that's my impression after having spoken to a lot of people who have had to interact with, or work within the constraints of, an ISMS.

Basically, it comes down to a lack of understanding about risk. When we talk about security and risk, it tends to be in a, dare I say it, cyber context rather than an information context, and so IT is seen as the driver rather than literally every other part of the business. You end up with ill-fitting governance programs because they are driven by IT and/or security, which makes people see governance as a business drag rather than an enabler. But I digress. Clause 4's other activities:

  • 4.2 - Understanding the needs and expectations of interested parties which basically boils down to, where possible am I practicing what I preach;
  • 4.3 - Determining the scope, which in reality is going to be most of our systems. However, one of the things I want to investigate is whether there is an approachable way of focusing on particular areas first and increasing the scope in iterations. For this purpose I've broken things down into particular business units, i.e. engineering, radio, WinTel etc. This is still a bit of a work in progress as I try to work out how to slice up all the different areas a house of tech people covers;
  • 4.4 – Determining what our ISMS looks like, because it's not a one-size-fits-all problem.

To be frank, as I typed all of this out, I wondered to myself "why bother? You probably have most of the answers to these questions already." But I feel like that's the key problem: without proper consideration, I'll create more work for myself and develop a system that doesn't work for me.

My next steps over the following week:

  • Determine my needs and expectations of interested parties (4.2)
  • Review the purpose, vision, and mission with reference to interested parties (4.1)
  • Conduct a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis (4.1).
  • Sketch out my ISMS and document as I go along (4.4).
  • Determine the scope of the ISMS (4.3).

Key Terminology

  • Information Security Management System: a structured and systematic approach to managing company information.
  • Personally Identifiable Information: Any data that could potentially be used to identify a particular person. Examples include a full name, driver's license number, bank account number, passport number, and email address.
  • Steering Committee: A committee that decides on the priorities or order of business of an organisation and manages the general course of its operations.
  • Change Advisory Board: Delivers support to a change-management team by advising on requested changes, assisting in the assessment and prioritisation of changes.
  • ESXi: Hypervisor developed by VMware for deploying and serving virtual computers.