Category Archives: Security

Cloud Fuels Disruption in Security Market


Cloud computing is having a major impact on every area of IT and driving profound changes across the business world. Here’s a look at how the security field is evolving to embrace the cloud in 2016.

  • Malware Protection
  • Use of Firewalls
  • Load Balancing
  • Encrypting
  • Switching
  • App-Based Storage
  • Conclusion

The security industry is rapidly changing, with firewall and switching companies fading away to make room for solutions more directly geared toward the cloud. On the other hand, there are certain types of security firms that will continue to grow as the landscape shifts increasingly from physical to virtual.

Malware Protection

Anti-malware companies have expertise related to security, but their focus has traditionally been on in-house systems. Now that the cloud is becoming so central to computing, malicious parties are turning to those systems as points of entry for attack. In 2016, anti-malware firms will further invest in the development and introduction of cloud-specific tools.

The services that will be used are fundamentally similar to their traditional counterparts, since the basic idea is still to check traffic for possible malware injections. One challenging aspect is interoperability, notes TechCrunch – “how the anti-malware solution gets inserted into a cloud system to which it doesn’t necessarily have access.”

This year, cloud infrastructure-as-a-service providers will allow customers to use more anti-malware options with their systems.

RELATED: As companies explore public cloud, they are realizing it’s important to look beyond brand recognition and price to the actual technologies and design principles that are used. In other words, what defines a strong IaaS service? At Superb Internet, we offer distributed rather than centralized storage (for no single point of failure) and InfiniBand rather than 10 GigE (for dozens of times lower latency).

Use of Firewalls

The unfortunate news for firewall providers is that their market is taking a huge hit with the emergence of the cloud, since access control is now handled externally.

Firewalls determine the extent to which communication between certain systems is allowed and which protocols are acceptable. These systems have typically been IP-focused. Services such as packet monitoring and app awareness will still be needed, but access control is handled as part of the cloud service.
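
To make that IP-focused model concrete, below is a minimal Python sketch of the kind of rule evaluation a traditional firewall performs; the rule list, field names, and addresses are purely illustrative, not any vendor's configuration format.

    import ipaddress

    # Hypothetical allow-list: (permitted source network, protocol, destination port)
    RULES = [
        (ipaddress.ip_network("10.0.0.0/24"), "tcp", 443),    # internal HTTPS traffic
        (ipaddress.ip_network("192.168.1.0/24"), "tcp", 22),  # admin SSH from the office LAN
    ]

    def is_allowed(src_ip: str, protocol: str, dst_port: int) -> bool:
        """Default deny: permit a connection only if some rule matches it."""
        src = ipaddress.ip_address(src_ip)
        return any(
            src in network and protocol == proto and dst_port == port
            for network, proto, port in RULES
        )

    print(is_allowed("10.0.0.15", "tcp", 443))    # True: matches the first rule
    print(is_allowed("203.0.113.9", "tcp", 443))  # False: unknown source, so denied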

Load Balancing

Load balancing is a critical part of computing, but the companies specializing in this area will also become less prominent in 2016. Load balancing spreads workloads evenly across machines, a characteristic that is built into the cloud model and seen as one of its primary strengths.

Load balancing will still make sense with some legacy systems.
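
For illustration, here is a minimal round-robin sketch in Python of the scheduling a load balancer performs; the server names are hypothetical, and in a cloud platform this distribution is handled for you by the provider.

    import itertools

    # Hypothetical pool of identical back-end servers
    backends = ["app-server-1", "app-server-2", "app-server-3"]
    rotation = itertools.cycle(backends)

    def route_request(request_id: int) -> str:
        """Round-robin: each incoming request goes to the next server in the pool."""
        return next(rotation)

    for request_id in range(5):
        print(f"request {request_id} -> {route_request(request_id)}")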

Encrypting

With traditional systems, encryption was often an afterthought. In 2016, as the cloud blossoms, so will encryption, which now plays a more pivotal role. However, adaptation to the cloud is key.

“Traditional agent-based encryption is … hard to deploy because it doesn’t work seamlessly with data management and other infrastructure functions,” notes TechCrunch. “[E]ncryption vendors need to develop solutions that are massively scalable and truly transparent.”

Encryption will be built into many cloud systems in 2016. Independent encryption tools will also become more prevalent. Encryption could eventually become a more comprehensive strategy to safeguard networks via access control, alongside its role in shielding the data.
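
To illustrate the “encrypt before the data leaves your control” idea, here is a minimal Python sketch using the third-party cryptography package’s Fernet recipe; the record contents are invented, and a real deployment would keep the key in a dedicated key management service rather than generating it inline.

    # Requires the third-party "cryptography" package (pip install cryptography)
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in production, fetch this from a key management service
    cipher = Fernet(key)

    record = b"account=12345;balance=9200.00"   # hypothetical sensitive record
    token = cipher.encrypt(record)              # only this ciphertext is sent to cloud storage
    restored = cipher.decrypt(token)            # decryption happens back under your control

    assert restored == record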

Encryption could gradually become the new “ground zero” for security.

Switching

Switching solutions are sophisticated tools, with capabilities such as establishment of a virtual local area network (VLAN). Typically switching systems designate which servers within a data center can and can’t interact. Within a network management context, switching becomes a much more elaborate undertaking.

With cloud, you no longer have to worry about network management in that sense. You can establish parameters through which switching occurs automatically. Network access control becomes a non-issue.
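
As a rough illustration of “set the parameters and let switching happen automatically,” the hypothetical Python sketch below tags servers with VLAN IDs and permits traffic only within a VLAN; the names and IDs are invented for the example.

    # Hypothetical VLAN assignments: server name -> VLAN ID
    vlan_of = {
        "web-01": 10,
        "web-02": 10,
        "db-01": 20,
    }

    def may_communicate(server_a: str, server_b: str) -> bool:
        """Allow traffic only between servers tagged with the same VLAN."""
        vlan_a, vlan_b = vlan_of.get(server_a), vlan_of.get(server_b)
        return vlan_a is not None and vlan_a == vlan_b

    print(may_communicate("web-01", "web-02"))  # True: both on VLAN 10
    print(may_communicate("web-01", "db-01"))   # False: different VLANs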

You do still want switches so that a single network can be supported by more than one infrastructure, but that is not a huge business. The business of switching will therefore be in decline in 2016.

The problems of switching companies are amplified by the challenge of cloud integration. “To get a so-called virtual switch inserted in a cloud-based data center, it would need to be tightly integrated with a cloud-based hypervisor,” says TechCrunch. “[There is] no incentive for cloud providers to give third-party switch vendors special access to their systems.”

App-Based Storage

Data is expanding astronomically, and the cloud gives enterprises someplace to immediately store all that extra information. That gives rise, in turn, to storage through applications.

The companies that will be the most successful with these cloud storage solutions are ones that will allow organizations to manage both public and private clouds.

The storage systems that will succeed the most will be ones that have encryption as a central component. Otherwise it will be necessary to encrypt through additional means, and that’s inconvenient.

Conclusion

There has been a lot of hype for the cloud in the last few years, but 2016 will be a year of massive change. As TechCrunch notes, “The transition of the enterprise from private to public clouds is likely to be the most impactful transition in the IT data center sector in the past three decades.”

Big Banks Talk Containers & Cloud


In the finance sector, data safety is paramount. The adoption of cloud and containers by two huge banks demonstrates that these technologies are of use to virtually any organization.

  • The Growing Financial Cloud
  • Simplification at the Big Banks
  • Streamlining & Public Cloud
  • Forest, Not Trees
  • Top Performance & Security

The Growing Financial Cloud

For financial firms, the question of using the third-party virtual systems of public cloud is challenging: financial information must be highly secure both to maintain reputation and to stay in line with regulations. Any data solution used by banks must be carefully scrutinized prior to deployment.

Two of the findings from a poll published in the spring by the Cloud Security Alliance suggest that the financial industry, like healthcare and the rest of business, is increasingly comfortable with cloud technology, particularly public cloud:

  • More than four out of five finance-sector companies that are still establishing their cloud plan will use public options (as opposed to private or hybrid ones), at least in part.
  • Almost three-quarters of poll participants were currently transitioning from hybrid to public models.

Those “two statistics show added comfort and assurance when practicing in the cloud and [are] an encouraging sign of maturity in cloud confidence,” according to the team who designed the poll.

Goldman Sachs and Bank of America are cases in point.

Simplification at the Big Banks

Since 2009, Goldman Sachs has been gradually shifting its computing to the cloud. Today, approximately 85% of its infrastructure is cloud-based, according to the company’s cloud director, J Ram. As cloud has risen in finance, so have application containers – and that’s true both at Goldman Sachs and Bank of America.

These companies are technologically massive. While Bank of America has 9,000 infrastructure personnel and almost 18,000 developers, Goldman Sachs has more than 4,000 applications and 8,000 developers.

Beginning in the spring of 2014, containers (a field dominated by the increasingly popular Docker) became more widely accepted as a trusted way to package apps for easier portability. Ram notes that containers, already used in some production environments at Goldman, are essentially about better integrating the various elements of IT (development, operations, and administration) into a more easily controlled linear progression.

“[Containers] allow infrastructure folks to start optimizing the platform, application teams to think about application delivery and operation teams to think about operational scale and handling that complexity,” he says.

Bank of America has not yet cleared any containers for production, according to the company’s technology strategist, Ryan Thomas. However, they are currently being used in dozens of development and test settings, with broad acceleration planned for 2016.

Thomas thinks that containers are revolutionary because they allow the bank’s developers and datacenter employees to become much more efficient. They help the business shift focus away from enterprise service buses and middleware, amplifying productivity and innovation. It’s primarily about redirecting activity rather than cutting costs.

“Simplifying that and really flipping ratios of people who are just maintaining, supporting, managing applications, to people who are pushing the applications forward and bringing more value for our customers is the foundation of the goal,” he explains.

Streamlining & Public Cloud

This seismic technological shift isn’t just about individual roles but about consolidating systems. Bank of America is in the process of closing nearly 90% of its data centers (64 in 2014 reduced to 8 by December 2016). That change is in large part because of its continuing public cloud adoption. The same trend is seen at Goldman Sachs.

For a large enterprise, these IT migrations are certainly not as simple as pushing a button. For instance, Bank of America has an extraordinarily complex infrastructure because of mergers and acquisitions. Similarly, Goldman Sachs ran into typical obstacles, including refashioning configuration and incorporating legacy source code into containers and cloud.

Forest, Not Trees

Another aspect of the size of these two organizations is that it’s easy for managers to think about improving their own piece rather than considering the whole puzzle. Instead of getting excited about what a breakthrough tool can do for 0.5% of workloads, it’s better to think about adjustments that can generally upgrade your efficiency.

Another major element, as Thomas reiterates, is data safety. “As you move into a container world… you’re moving more and more into a world that hasn’t been vetted with the compliance and regulatory environments,” he says, “and that’s a challenge for us, and that’s always somewhat of a lag for us.”

Top Performance & Security

Are you looking for public cloud that is top-echelon in terms of both performance and security? At Superb Internet, our cloud outperforms that of most providers for a number of reasons:

  • We use distributed storage, so there are no bottlenecks.
  • We use InfiniBand for dozens of times lower latency.
  • Our cloud plans are never oversold, allowing our confident guarantees.
  • Our certifications include SSAE-16, ISO 27001:2013, and ISO 9001:2008.

Ready to move forward? Get FREE cPanel with any new cloud plan.

The Greatest Vulnerability in Your Network: Users


The most thorough firewalls are useless against oblivious users, who are duped into inviting malware and spyware onto secure networks. Users are, more often than not, the biggest weakness in your network’s security, and hackers are increasingly using social engineering to gain access to secure data.

Human Hacking

Social engineering, much like classic hacking, takes note of unintentional patterns and finds openings in otherwise secure environments. Human hacking takes advantage of our unconscious decision-making patterns to gain access to secure networks.

Trojan Horses

Hackers take advantage of our assumptions about what kinds of devices and hard media are “safe.” Even air-gapped networks are vulnerable to these Trojan horses. For example, hackers will leave USB drives loaded with reconnaissance software on a reception desk or in the parking lot of a business, trusting that some good Samaritan will plug one into a secure computer to see if the owner can be identified. Meanwhile, the device maps the network and transmits that information as soon as it is plugged into a networked computer. And of course, any company with a bring-your-own-device policy is highly vulnerable. Even when personal devices are prohibited for work use in air-gapped offices, employees itching for an email or Facebook fix often turn their cell phones into hotspots to connect work devices, however briefly, to the internet.

Malware can also be hidden within files that appear to be legitimate communication. One famous hack involved a hacker posing as a conference photographer, taking pictures of attendees during social functions, and then sending out the photos with malicious code embedded in the images.

Clever Disguises

Some USB drives are programmed to appear to the computer as another kind of external device, such as a keyboard, so they can enter malicious commands. CDs and DVDs of all kinds can also hide malware and spyware. Sophisticated hackers have even intercepted shipments of software CDs, hard disk drives, and other devices; installed malware; rewrapped the products, reproducing the shrink wrapping, packaging, and so on; and sent them along to be installed by unsuspecting IT pros. This malware infects the firmware of hard disk drives prior to the OS load, creating a secret storage vault that survives military-grade disk wiping, formatting, and encryption. Vendors impacted by this type of hack include Maxtor, Samsung, IBM, Toshiba, and others.

Another example of infiltration disguised as innocuous activity is a virus that impersonates a device’s network interface card: when the user requests a password-protected site, it can redirect to a dummy site that records the password.

Prevention: User Policies

Given the variety of ways hackers exploit users, what can IT professionals do to keep a network secure? First, a strong, highly enforceable acceptable-use policy is a must. Include policies that govern email, websites, and social media usage. Consider disallowing external devices. Tie compliance with this policy to promotion, advancement, or pay raises. Some highly secure organizations terminate employees for breaching these policies.

To discourage employees from visiting dangerous sites, you can send out an email every week with a record of their web usage. They’re likely to be more careful when they know they’re being watched.

Prevention: Admin Policies 

On the admin side, IT departments should insist on user-access control and never make average users admins. Limiting their access also limits the chaos unleashed by their lapses in judgment.

Finally, all network equipment that comes into the office, from hard disk drives to network interface cards, must go through the IT department. IT pros should check carefully that tamper-proof packaging is intact to help prevent compromised devices from accessing your data.

Byline: Leslie Rutberg is a tech and IT industry blogger for CBT Nuggets. This article was based on their recent webinar “10 Tips for Locking Down End-User Security.”

Spotlight on the FISMA Risk Management Framework (RMF)


  • RMF Definition & Foundation
  • Framing Security in Terms of Risk – 6 Steps
  • The Amorphous Nature of the RMF
  • RMF Supporting Documents
  • Taking the Pain Out of FISMA Compliance

RMF Definition & Foundation

Risk Management Framework (RMF) is the name for a structured approach to implementing high security on any IT system used by the federal government, including those of hosting providers. An effort to better standardize best practices within the public sector, the RMF is an update of the Certification and Accreditation (C & A) model used previously by federal agencies, security contractors, and the Pentagon.

The Risk Management Framework is a crucial component of meeting the requirements of the Federal Information Security Management Act (FISMA). Its core tenets are derived from reports issued by the Committee on National Security Systems (CNSS) and the National Institute of Standards and Technology (NIST).

“The selection and specification of security controls for an information system is accomplished as part of an organization-wide information security program that involves the management of organizational risk,” explains NIST, defining that term as “the risk to the organization or to individuals associated with the operation of an information system.”

Risk management is critical to security. The framework makes determining appropriate controls more efficient, enhancing consistency throughout the federal infrastructure (though modulated by the specific attributes of individual systems).

Framing Security in Terms of Risk – 6 Steps

As its name suggests, the RMF positions understanding risk as central to establishing security. The framework follows a basic step-by-step process which is usable with newly adopted systems as well as anything currently in operation:

1. Determine the category

Via an impact analysis, figure out the risk category to which a system and its data belong. “The security categories are based on the potential impact on an organization should certain events occur which jeopardize the information and information systems needed by the organization,” says NIST. “Security categories are to be used in conjunction with vulnerability and threat information in assessing the risk.”

2. Choose controls

The category tells you what security mechanisms are needed at a minimum. Adjust and bolster the controls as appropriate. (A brief sketch of how steps 1 and 2 fit together appears after step 6 below.)

3. Adopt the new tools

Install the tools, keeping records of everything that you do.

4. Analyze

Analyze the controls to make sure that they have been installed adequately and are successfully performing the function for which they were selected.

5. Confirm

Confirm that the risk presented by the environment, with all security mechanisms installed, is acceptable for authorized use.

6. Continually monitor

Monitor the system as time goes on (with assessments, analyses, and notations of changes).
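
As a rough illustration of how steps 1 and 2 fit together, the Python sketch below applies the “high water mark” idea from FIPS 199 (the standard behind the security categories NIST references above) to pick a minimum control baseline; the impact ratings and control names are placeholders, not the actual NIST catalog.

    # Impact levels in increasing order of severity (FIPS 199 terminology)
    LEVELS = ["low", "moderate", "high"]

    def system_category(confidentiality: str, integrity: str, availability: str) -> str:
        """Step 1: the overall category is the highest impact across the three objectives."""
        return max(confidentiality, integrity, availability, key=LEVELS.index)

    def baseline_controls(category: str) -> list:
        """Step 2: the category points to a minimum control baseline (placeholder names)."""
        baselines = {
            "low": ["access-control-basic", "audit-logging"],
            "moderate": ["access-control-basic", "audit-logging", "encryption-at-rest"],
            "high": ["access-control-basic", "audit-logging", "encryption-at-rest",
                     "24x7-incident-response"],
        }
        return baselines[category]

    category = system_category("moderate", "low", "high")
    print(category)                     # "high"
    print(baseline_controls(category))  # the starting point; adjust and bolster as appropriate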

The Amorphous Nature of the RMF

The National Institute of Standards and Technology notes that the six core principles or steps of the RMF provide a nutshell understanding, while the specific rules and standards issued by NIST offer a more granular view on assessments and controls.

Like risk itself, the RMF is a bit amorphous – especially true of the supporting materials. Because of that, NIST notes that sometimes one paper will reference language in another paper that has been replaced with a new version.

RMF Supporting Documents

Here are the three major categories of general supporting materials, which point to more thorough reports:

  1. FAQ

The frequently asked questions take information from numerous papers to advise on the six core concepts. The FAQ questions all fit into one of four categories, according to NIST: “general information …, fundamental knowledge needed to understand and implement the activities …, guidance to help organizations prepare for and implement the step, and step-by-step guidance [for] applying the step to individual information systems.”

  2. Roles and Responsibilities Charts

These charts establish what’s happening with your people, identifying who is taking charge of certain aspects.

  3. Quick Start Guides

Just like a brief, to-the-point manual that comes along with a new printer or shredder, these NIST publications are nutshell overviews of the reports pertaining to each RMF step. There are multiple guides for each of the steps – one from a management perspective and others directed toward the main people who will be putting systems into place.

Typically, the entities that handle categorization (step 1), for instance, are the owners of the data and the office that handles IT security. To accommodate those different audiences, advice is provided to both party types in that literature.

While these brief manuals are intended to be helpful, they are limited in scope. “The Quick Start Guides provide implementation guidance and examples on how to plan for, conduct, and document the results,” says NIST. “While the guides provide examples and sample documentation, they are not mandatory nor do they prescribe required formats.”

Taking the Pain Out of FISMA Compliance

Do you need a FISMA-compliant partner? One way to reduce your risk is to work with Superb Internet. Our team of engineers and security technicians is available every day, all day, for consultations and assistance – working with you to secure your environment and to apply appropriate FISMA security controls.

By Kent Roberts

Building Risk-Awareness into the FISMA-Compliant Cloud (Part 2)


NOTE: This is Part 2 of a two-part series. To read Part 1, please click HERE.

  • Cloud Continuous Monitoring Plans – 6 Elements [continued]
  • Humans & Machines
  • Why is Ongoing Monitoring So Critical?
  • Compliance & the Myth of “The Cloud”

Technology [continued]

The provider wants the system to be user-friendly for its staff, so its environment will typically be fronted by and coordinated within a control panel. In that setting, analysts can review the technology’s suggestions and determine what needs attention, applying a more sophisticated human perspective to any possible threats. The cloud company or federal office will decide how frequently reports should be issued. The control panel should also surface real-time data, allowing the CSP to get up-to-the-second information and fix problems as they arise.

Instructional guides

It’s also necessary to properly educate everyone working on your ongoing monitoring team. Create instructional guides, conduct regular staff training, and otherwise share information throughout your IT staff.

“Thorough and practical training should be part of the continuous monitoring strategy itself,” says Svec, “and this training should cover the processes involved with … all elements of a continuous monitoring strategy, as well as training on the security tools being used.”

Also note that the cloud company’s ongoing monitoring should be completely agile and adaptive, with updates made to training materials as monitoring information provides an ever-changing picture of the threat landscape.

Assessments

You want to keep checking the security controls you have in place. By assessing the entire system, from all angles, you make sure you’re acting competently at all times. Conduct these assessments with five steps:

  1. Discuss the system with a spectrum of relevant parties.
  2. Look at the current policy carefully.
  3. Analyze the parameters of your IT environment.
  4. Run a test to verify smooth operation.
  5. Determine what tasks are performed through automation and through human activity, in part to determine how appropriate your approaches are.

When you have the right technology implemented, you can check that everything is working properly at predetermined times, scheduling both your tools and your people. Essentially, your security stance should blend your protective software and hardware with continually developing knowledge to stay ahead of emergent threats.

Svec says that the interval at which these checks are performed should not be random but determined by cost-benefit. Look at the amount of associated risk in relationship to the amount of time and resources necessary to conduct these assessments. “In an efficient continuous monitoring model,” he adds, “risks are identified and fixed quickly.”

Documentation

Everything must be documented, with the resulting reports submitted to the correct individuals in order for the information revealed in the assessment to be actionable. When cloud providers retool their reporting mechanisms to fit their own needs, they can act soundly and efficiently in response to threats. Their monitoring UI should give them easy and reliable access to all information, estimates of relevant vulnerability, and any details on fixing the problem.

People who specialize in security need to work with those who design infrastructure to form a more integrated approach toward the information, operating via a policy built for speed.

Execution

Ongoing monitoring requires a proactive response to incoming data. Fast response and remediation are critical. In order to resolve problems, cloud providers that prioritize security and compliance follow three steps:

  1. Determine how much risk a given threat or parameter represents.
  2. Try a remediation tactic.
  3. If the tactic works, support it.

Svec stresses that conflicts of interest must be avoided: “Depending on the relationship of the third-party security assessor to the government agency or cloud provider,” he says, “a secondary assessor may be involved at this last stage in order to preserve independent assessor status.”

Humans & Machines

Federal offices and cloud companies have a lot to gain from the perspectives of those who specialize in controls, assessment, and system design. Plus, they should have access to a wide range of self-guided technology.

This method of conducting ongoing monitoring threads together the strengths of well-built automation software and the human side (assessment itself and administration), with the primary emphasis on making the information actionable.

Why is Ongoing Monitoring so Critical?

“A well‐designed and well‐managed continuous monitoring program can effectively transform an otherwise static and occasional security control assessment and risk determination process into a dynamic process,” says the National Institute of Standards and Technology, which adds that it has the capacity to deliver fundamental, near real-time risk data to key stakeholders.

Compliance & the Myth of “The Cloud”

Notice above how speed is a critical factor. Of course it is. The cloud is fast, so you should be fine, right?

Consider that there is no such thing as “the cloud” – the speed of your cloud service varies considerably based on your service partner. Superb Internet leverages InfiniBand rather than Ethernet and distributes storage rather than centralizing it, delivering significantly better local disk I/O.

Would you like FISMA compliance-ready cloud with actual performance measurements that are typically four times better than Amazon and SoftLayer? Choose our PassMark-rated cloud servers.

By Kent Roberts

Building Risk-Awareness into the FISMA-Compliant Cloud (Part 1)


FISMA compliance requires ongoing monitoring that adapts as the threat landscape evolves. To stay vigilant with information security, monitoring must become dynamic, aware of emergent risks in near real-time.

  • NIST Risk Management Framework – 6 Steps
  • Cloud Ongoing Monitoring Plans – 6 Elements
  • FISMA-Compliant Partnership

In order to meet the requirements of the Federal Information Security Management Act, government agencies and cloud companies have to follow strict rules and parameters. One requirement that calls for refined best practices is ongoing monitoring.

To standardize federal efforts and clarify how to maintain compliance, the FISMA recommendations published by the National Institute of Standards and Technology (NIST) contain a risk management framework that details how public-sector entities and technology services can monitor risk.

NIST Risk Management Framework – 6 Steps

The framework basically delineates how to set up a risk management apparatus and keep it rolling. NIST explains why focusing on risk is so important: “The management of organizational risk is a key element in the organization’s information security program and provides an effective framework for selecting the appropriate security controls for an information system—the security controls necessary to protect individuals and the operations and assets of the organization.”

There are six basic steps that can be taken to manage risk within your legacy systems, per the framework. Pay special attention to the sixth one:

Step 1 – Label

Conduct an impact analysis so you understand all possible negative consequences to particular parts of your infrastructure. Label each system and dataset in terms of that impact.

Step 2 – Choose

Determine what bare-minimum security tools should be used on that system because of the way you have labeled it. Choose appropriate tools, adding more if that seems wise according to your general risk analysis.

Step 3 – Deploy

Activate all mechanisms. Record how you installed the tools and put them into action.

Step 4 – Test

Test your system to make sure that all your protections are set up in the strongest, most coherent way.

Step 5 – Confirm

Sign off that the IT environment is safe to use following your evaluation of risk, which extends from the agency’s own operations and assets to individuals, other agencies, and the nation as a whole. A signature means that you have mitigated risk to an acceptable level.

Step 6 – Monitor

Finally, it is necessary to analyze the security tools that you have established for your IT infrastructure continually, says the NIST, so you can determine if your controls are working, record any adjustments, apply any impact analysis findings, and communicate security details to leadership as relevant.

Cloud Ongoing Monitoring Plans – 6 Elements

That final step deserves its own dedicated consideration, according to Veris Group cybersecurity consultant David Svec. “Continuous monitoring for FISMA compliance requires cloud providers to shift from a traditionally static approach to a cyclical, more dynamic strategy in order to provide the near real-time situational awareness they need to make evidence-based security decisions,” he explains, adding that awareness of moment-by-moment security creates stronger compliance with additional security standards, including PCI, HIPAA, HITECH, and SOX.

Ongoing monitoring is not really about taking notes on threats. It’s about continually tweaking and modifying security. To implement a strong ongoing monitoring plan for a cloud service, as we have, you must include administration, technology, instructional guides, assessments, documentation, and execution.

Administration

In order for ongoing monitoring to change appropriately given the context, general IT governance – establishment of accountability and responsibility – and administration are essential. These three components are necessary for sound management:

  • Ongoing monitoring plan – Cloud service providers (CSPs) need to have a strategic plan that is verified by their top leaders and technology directors. The plan should state what specifically is done in order to perform regular analyses. The document itself should be reviewed periodically as well.
  • Integration – Cloud companies want their monitoring to be meshed into general business operations via establishment of responsibility. “This approach allows the strategy to become an integrated and ongoing part of business operations — not a special add-on,” says Svec. “This reinforces the focus on real-time monitoring over point-in-time assessments.”
  • Action – To move forward in an organized fashion, CSPs need to know exactly what they are going to do, which is why it’s important to document not just policies but procedures as well. Firms don’t just want to figure out how their systems can fail. Once they have assigned accountability for all aspects of monitoring, they have identified those who must lead remediation.

Careful, comprehensively documented administration allows you to continually monitor in a no-frills way that resolves issues efficiently and in the absence of confusion.

Technology

Figuring out the best devices and applications to maintain security is of course fundamental for CSPs to succeed with ongoing monitoring. “A cost-effective approach is for cloud providers or agencies to take stock of their existing environmental sensors,” comments Svec, “and then determine what new security and reporting tools are required to provide the appropriate level of automation.”

FISMA-Compliant Partnership

Do you need a FISMA-compliant environment? At Superb Internet, our systems are based on security best practices that meet or exceed NIST 800-53 Rev. 3 requirements – implemented at the physical, network, system, and operational/management layers. Learn more here.

NOTE: This is the first part of a two-part series. To read Part 2, please click HERE.

By Kent Roberts