User talk:Thesandeepsinghdon

Welcome to Wikimedia Commons, Thesandeepsinghdon!

-- Wikimedia Commons Welcome (talk) 17:08, 6 October 2016 (UTC)

Network


Firewall (computing)



[Image: An illustration of where a firewall would be located in a network.]


[Image: An example of a user interface for a firewall on Ubuntu (Gufw).]

A firewall is a device or set of devices designed to permit or deny network transmissions based upon a set of rules. It is frequently used to protect networks from unauthorized access while permitting legitimate communications to pass. Many personal computer operating systems include software-based firewalls to protect against threats from the public Internet. Many routers that pass data between networks contain firewall components and, conversely, many firewalls can perform basic routing functions.[1]


History

The term firewall originally referred to a wall intended to confine a fire or potential fire within a building. Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment.

Firewall technology emerged in the late 1980s, when the Internet was a fairly new technology in terms of its global use and connectivity. The predecessors to firewalls for network security were the routers used in the late 1980s:[2]

• Clifford Stoll's discovery of German spies tampering with his system[2]
• Bill Cheswick's "Evening with Berferd" (1992), in which he set up a simple electronic jail to observe an attacker[2]
• In 1988, an employee at the NASA Ames Research Center in California sent a memo by email to his colleagues[3] that read, "We are currently under attack from an Internet VIRUS! It has hit Berkeley, UC San Diego, Lawrence Livermore, Stanford, and NASA Ames."
• The Morris Worm, which spread itself through multiple vulnerabilities in the machines of the time. Although it was not malicious in intent, the Morris Worm was the first large-scale attack on Internet security; the online community was neither expecting an attack nor prepared to deal with one.[4]

First generation: packet filters

The first paper published on firewall technology appeared in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what became a highly involved and technical Internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based on the original first-generation architecture.[5]

Packet filters act by inspecting the "packets" which transfer between computers on the Internet. If a packet matches the packet filter's set of rules, the packet filter will drop (silently discard) the packet or reject it (discard it, and send "error responses" to the source). This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (i.e., it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port number).[6]

TCP and UDP protocols constitute most communication over the Internet, and because TCP and UDP traffic by convention uses well-known ports for particular types of traffic, a "stateless" packet filter can distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email transmission, file transfer), unless the machines on each side of the packet filter are both using the same non-standard ports.[7]

Packet filtering firewalls work mainly on the first three layers of the OSI reference model, which means most of the work is done between the network and physical layers, with a little bit of peeking into the transport layer to figure out source and destination port numbers.[8] When a packet originates from the sender and filters through a firewall, the device checks for matches to any of the packet filtering rules that are configured in the firewall and drops or rejects the packet accordingly. When the packet passes through the firewall, it is filtered on a protocol/port number basis. For example, if a rule exists in the firewall to block telnet access, the firewall will block TCP traffic to port number 23.[9]
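To make the rule matching concrete, here is a minimal Python sketch of a stateless packet filter, under the simplifying assumption that a packet is a small record of addresses, protocol, and port; the Packet and Rule structures and the rule set are hypothetical illustrations, not any real firewall's rule format.

    # A minimal sketch of stateless packet filtering (hypothetical rule format).
    # Each rule matches on protocol/port fields only; no connection state is
    # kept, mirroring a first-generation packet filter.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        protocol: str                   # "tcp", "udp", "icmp", ...
        dst_port: Optional[int] = None

    @dataclass
    class Rule:
        action: str                     # "drop", "reject", or "accept"
        protocol: Optional[str] = None  # None acts as a wildcard
        dst_port: Optional[int] = None

    RULES = [
        Rule(action="drop", protocol="tcp", dst_port=23),  # block telnet
        Rule(action="accept"),                             # default: allow
    ]

    def filter_packet(packet: Packet) -> str:
        """Return the action of the first rule the packet matches."""
        for rule in RULES:
            if rule.protocol is not None and rule.protocol != packet.protocol:
                continue
            if rule.dst_port is not None and rule.dst_port != packet.dst_port:
                continue
            return rule.action
        return "drop"  # default-deny if no rule matches

    # Example: a telnet attempt is dropped, web traffic passes.
    print(filter_packet(Packet("203.0.113.5", "192.0.2.10", "tcp", 23)))  # drop
    print(filter_packet(Packet("203.0.113.5", "192.0.2.10", "tcp", 80)))  # accept

Because no state is kept, the decision for each packet depends only on the packet's own fields, which is exactly the first-generation behaviour described above.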
Second generation: "stateful" filters

Main article: Stateful firewall

From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presetto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level firewalls.[citation needed]

Second-generation firewalls perform the work of their first-generation predecessors but operate up to layer 4 (the transport layer) of the OSI model. They examine each data packet as well as its position within the data stream. Known as stateful packet inspection, this approach records all connections passing through the firewall and determines whether a packet is the start of a new connection, a part of an existing connection, or not part of any connection. Though static rules are still used, these rules can now contain connection state as one of their test criteria.[citation needed]

Certain denial-of-service attacks bombard the firewall with thousands of fake connection packets in an attempt to overwhelm it by filling up its connection state memory.[citation needed]

Third generation: application layer

Main article: Application layer firewall

The key benefit of application layer filtering is that it can "understand" certain applications and protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect if an unwanted protocol is sneaking through on a non-standard port or if a protocol is being abused in any harmful way.

The existing deep packet inspection functionality of modern firewalls can be shared by intrusion prevention systems (IPS). Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes.

Another axis of development is the integration of user identity into firewall rules. Many firewalls provide such features by binding user identities to IP or MAC addresses, which is very approximate and can be easily circumvented. The NuFW firewall provides real identity-based firewalling by requesting the user's signature for each connection. authpf, on BSD systems, loads firewall rules dynamically per user, after authentication via SSH.

Types

There are different types of firewalls depending on where the communication takes place, where the communication is intercepted, and the state that is being traced.[10]

Network layer or packet filters

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set. The firewall administrator may define the rules, or default rules may apply. The term "packet filter" originated in the context of BSD operating systems.

Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful firewalls maintain context about active sessions, and use that "state information" to speed packet processing. Any existing network connection can be described by several properties, including source and destination IP address, UDP or TCP ports, and the current stage of the connection's lifetime (including session initiation, handshaking, data transfer, or connection completion). If a packet does not match an existing connection, it is evaluated according to the rule set for new connections. If a packet matches an existing connection based on comparison with the firewall's state table, it is allowed to pass without further processing.

Stateless firewalls require less memory, and can be faster for simple filters that require less time to filter than to look up a session. They may also be necessary for filtering stateless network protocols that have no concept of a session. However, they cannot make more complex decisions based on what stage communications between hosts have reached.

Modern firewalls can filter traffic based on many packet attributes, such as source IP address, source port, destination IP address or port, and destination service (such as WWW or FTP). They can filter based on protocols, TTL values, the netblock of the originator, and many other attributes. Commonly used packet filters on various versions of Unix are ipf (various), ipfw (FreeBSD/Mac OS X), pf (OpenBSD and other BSDs), and iptables/ipchains (Linux).

Application-layer

Main article: Application layer firewall

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application. They block other packets (usually dropping them without acknowledgment to the sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines. By inspecting all packets for improper content, firewalls can restrict or prevent outright the spread of networked computer worms and trojans.
The additional inspection criteria can add extra latency to the forwarding of packets to their destination.

Application firewalls function by determining whether a process should accept any given connection. They accomplish this by hooking into socket calls to filter the connections between the application layer and the lower layers of the OSI model; application firewalls that hook into socket calls are also referred to as socket filters. Application firewalls work much like a packet filter, but they apply filtering rules (allow/block) on a per-process basis instead of filtering connections on a per-port basis. Generally, prompts are used to define rules for processes that have not yet received a connection. It is rare to find application firewalls not combined or used in conjunction with a packet filter.[11]

Application firewalls further filter connections by examining the process ID of data packets against a rule set for the local process involved in the data transmission. The extent of the filtering that occurs is defined by the provided rule set. Given the variety of software that exists, application firewalls only have more complex rule sets for standard services, such as sharing services. These per-process rule sets have limited efficacy in filtering every possible association that may occur with other processes. They also cannot defend against modification of the process via exploitation, such as memory corruption exploits. Because of these limitations, application firewalls are beginning to be supplanted by a new generation of application firewalls that rely on mandatory access control (MAC), also referred to as sandboxing, to protect vulnerable services. An example of a next-generation application firewall is AppArmor, included in some Linux distributions.[12]

Proxies

Main article: Proxy server

A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, while blocking other packets. Proxies make tampering with an internal system from the external network more difficult, and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network.

Network address translation

Main article: Network address translation

Firewalls often have network address translation (NAT) functionality, and the hosts protected behind a firewall commonly have addresses in the "private address range", as defined in RFC 1918. Firewalls often have such functionality to hide the true address of protected hosts. Originally, the NAT function was developed to address the limited number of IPv4 routable addresses that could be used or assigned to companies or individuals, as well as to reduce both the amount and therefore the cost of obtaining enough public addresses for every computer in an organization. Hiding the addresses of protected devices has become an increasingly important defense against network reconnaissance.
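As a rough illustration of the address rewriting involved, here is a minimal Python sketch of a source-NAT mapping table; the addresses, the port pool, and the outbound/inbound helpers are hypothetical, and a real NAT implementation lives in the kernel, rewrites checksums, and expires entries.

    # A minimal sketch of source NAT (address/port rewriting), assuming a
    # hypothetical firewall with one public address.
    from itertools import count
    from typing import Optional, Tuple

    PUBLIC_IP = "198.51.100.1"   # the firewall's single public address
    _ports = count(40000)        # hypothetical pool of public-side ports

    nat_table = {}               # public port -> (private ip, private port)

    def outbound(private_ip: str, private_port: int) -> Tuple[str, int]:
        """Rewrite a private source address/port to the public address."""
        public_port = next(_ports)
        nat_table[public_port] = (private_ip, private_port)
        return PUBLIC_IP, public_port

    def inbound(public_port: int) -> Optional[Tuple[str, int]]:
        """Translate a reply on a public port back to the private host."""
        return nat_table.get(public_port)

    # A host in the RFC 1918 range opens a connection; the firewall
    # rewrites it, then maps the server's reply back.
    masked = outbound("192.168.0.42", 51515)
    print(masked)              # ('198.51.100.1', 40000)
    print(inbound(masked[1]))  # ('192.168.0.42', 51515)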

Hardware virtualization




Computer hardware virtualization is the virtualization of computers or operating systems. It hides the physical characteristics of a computing platform from users, instead presenting another, abstract computing platform.[1][2] At its origins, the software that controlled virtualization was called a "control program", but nowadays the terms "hypervisor" and "virtual machine monitor" are preferred.[citation needed]


Concept

The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called a "pseudo machine"), a term which itself dates from the experimental IBM M44/44X system.[citation needed] The creation and management of virtual machines has more recently been called "platform virtualization" or "server virtualization".

Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine (VM), for its guest software. The guest software is not limited to user applications; many hosts allow the execution of complete operating systems. The guest software executes as if it were running directly on the physical hardware, with several notable caveats. Access to physical system resources (such as network access, display, keyboard, and disk storage) is generally managed at a more restrictive level than the host processor and system memory. Guests are often restricted from accessing specific peripheral devices, or may be limited to a subset of the device's native capabilities, depending on the hardware access policy implemented by the virtualization host. Virtualization often exacts performance penalties, both in the resources required to run the hypervisor and in the reduced performance of the virtual machine compared to running natively on the physical machine.

Reasons for virtualization

• In the case of server consolidation, many small physical servers are replaced by one larger physical server to increase the utilization of costly hardware resources such as CPU. Although hardware is consolidated, typically OSs are not. Instead, each OS running on a physical server is converted to a distinct OS running inside a virtual machine. The large server can "host" many such "guest" virtual machines. This is known as Physical-to-Virtual (P2V) transformation.
• A virtual machine can be more easily controlled and inspected from outside than a physical one, and its configuration is more flexible. This is very useful in kernel development and for teaching operating system courses.[3]
• A new virtual machine can be provisioned as needed without the need for an up-front hardware purchase.
• A virtual machine can easily be relocated from one physical machine to another as needed. For example, a salesperson going to a customer can copy a virtual machine with the demonstration software to his laptop, without the need to transport the physical computer. Likewise, an error inside a virtual machine does not harm the host system, so there is no risk of breaking down the OS on the laptop.
• Because of this easy relocation, virtual machines can be used in disaster recovery scenarios.

However, when multiple VMs are concurrently running on the same physical host, each VM may exhibit varying and unstable performance, which highly depends on the workload imposed on the system by other VMs, unless proper techniques are used for temporal isolation among virtual machines.

There are several approaches to platform virtualization. Examples of virtualization scenarios:

• Running one or more applications that are not supported by the host OS: a virtual machine running the required guest OS could allow the desired applications to be run, without altering the host OS.
• Evaluating an alternate operating system: the new OS could be run within a VM, without altering the host OS.
• Server virtualization: multiple virtual servers could be run on a single physical server, in order to more fully utilize the hardware resources of the physical server.
• Duplicating specific environments: a virtual machine could, depending on the virtualization software used, be duplicated and installed on multiple hosts, or restored to a previously backed-up system state.
• Creating a protected environment: if a guest OS running on a VM becomes damaged in a way that is difficult to repair, such as may occur when studying malware or installing badly behaved software, the VM may simply be discarded without harm to the host system, and a clean copy used next time.

Full virtualization

Main article: Full virtualization


[Image: Logical diagram of full virtualization.]

In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS (one designed for the same instruction set) to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family. Examples outside the mainframe field include Parallels Workstation, Parallels Desktop for Mac, VirtualBox, Virtual Iron, Oracle VM, Virtual PC, Virtual Server, Hyper-V, VMware Workstation, VMware Server (formerly GSX Server), KVM, QEMU, Adeos, Mac-on-Linux, Win4BSD, Win4Lin Pro, and Egenera vBlade technology.

Hardware-assisted virtualization

Main article: Hardware-assisted virtualization

In hardware-assisted virtualization, the hardware provides architectural support that facilitates building a virtual machine monitor and allows guest OSes to be run in isolation.[4] Hardware-assisted virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system. In 2005 and 2006, Intel and AMD provided additional hardware to support virtualization. Sun Microsystems (now Oracle Corporation) added similar features in their UltraSPARC T-Series processors in 2005. Examples of virtualization platforms adapted to such hardware include Linux KVM, VMware Workstation, VMware Fusion, Microsoft Hyper-V, Microsoft Virtual PC, Xen, Parallels Desktop for Mac, Oracle VM Server for SPARC, VirtualBox, and Parallels Workstation.

Hardware platforms with integrated virtualization technologies include:

• x86 (and x86-64): AMD-V (previously known as Pacifica) and Intel VT-x (previously known as Vanderpool), with IOMMU implementations by both AMD and Intel
• Power Architecture (IBM, Power.org)
• Virtage (Hitachi)
• UltraSPARC T1, T2, T2+, SPARC T3 (Oracle Corporation)

In 2006, first-generation 32- and 64-bit x86 hardware support was found to rarely offer performance advantages over software virtualization.[5]
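On Linux, a quick way to see whether the x86 extensions mentioned above are present is to look for the "vmx" (Intel VT-x) or "svm" (AMD-V) flags in /proc/cpuinfo; the following minimal Python sketch assumes an x86 Linux host and is only a convenience check, not a substitute for the platform's own detection.

    # A minimal sketch: detect x86 hardware virtualization support on Linux
    # by looking for the "vmx" (Intel VT-x) or "svm" (AMD-V) CPU flags in
    # /proc/cpuinfo. Linux-specific; other OSes expose this differently.
    def hardware_virtualization_flags(path: str = "/proc/cpuinfo") -> set:
        flags = set()
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        return flags & {"vmx", "svm"}

    found = hardware_virtualization_flags()
    if found:
        print("Hardware-assisted virtualization:", ", ".join(sorted(found)))
    else:
        print("No vmx/svm flags found (or not an x86 Linux host).")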

Partial virtualization

In partial virtualization, including address space virtualization, the virtual machine simulates multiple instances of much of an underlying hardware environment, particularly address spaces.[clarification needed] Usually, this means that entire operating systems cannot run in the virtual machine – which would be the sign of full virtualization – but that many applications can run. A key form of partial virtualization is address space virtualization, in which each virtual machine consists of an independent address space. This capability requires address relocation hardware, and has been present in most practical examples of partial virtualization.[citation needed]

Partial virtualization was an important historical milestone on the way to full virtualization. It was used in the first-generation time-sharing system CTSS, in the IBM M44/44X experimental paging system, and arguably in systems like MVS and the Commodore 64 (a couple of 'task switch' programs).[dubious – discuss][citation needed] The term could also be used to describe any operating system that provides separate address spaces for individual users or processes, including many that today would not be considered virtual machine systems. Experience with partial virtualization, and its limitations, led to the creation of the first full virtualization system (IBM's CP-40, the first iteration of CP/CMS, which would eventually become IBM's VM family). (Many more recent systems, such as Microsoft Windows and Linux, as well as the remaining categories below, also use this basic approach.)[dubious – discuss][citation needed]

Partial virtualization is significantly easier to implement than full virtualization. It has often provided useful, robust virtual machines capable of supporting important applications, and it has proven highly successful for sharing computer resources among multiple users.[citation needed] However, in comparison with full virtualization, its drawback lies in situations requiring backward compatibility or portability: it can be hard to anticipate precisely which features have been used by a given application, and if certain hardware features are not simulated, any software using those features will fail.

Paravirtualization

Main article: Paravirtualization

In paravirtualization, the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying[clarification needed] the "guest" OS. This system call to the hypervisor is called a "hypercall" in TRANGO and Xen; it is implemented via a DIAG ("diagnose") hardware instruction in IBM's CMS under VM[clarification needed] (which was the origin of the term hypervisor). Examples include IBM's LPARs,[6] Win4Lin 9x, Sun's Logical Domains, z/VM,[citation needed] and TRANGO.

Operating system-level virtualization

Main article: Operating system-level virtualization

In operating system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" OS environments share the same OS as the host system – i.e., the same OS kernel is used to implement the "guest" environments. Applications running in a given "guest" environment view it as a stand-alone system. The pioneer implementation was FreeBSD jails; other examples include Solaris Containers, OpenVZ, Linux-VServer, AIX Workload Partitions, Parallels Virtuozzo Containers, and iCore Virtual Accounts.
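chroot is only one small ingredient of operating system-level virtualization (jails and containers add process, network, and resource isolation on top), but a minimal Python sketch of it illustrates the shared-kernel idea: the confined process sees a private filesystem root while still running on the host's kernel. The guest path used here is a hypothetical example, and the call requires root privileges.

    # A minimal sketch of the shared-kernel idea behind OS-level
    # virtualization: confine a process to a subtree of the host filesystem
    # with chroot. Real systems (FreeBSD jails, OpenVZ, Solaris Containers)
    # add resource limits, network isolation, and process separation on top.
    # Requires root privileges; the path below is a hypothetical example.
    import os

    def run_in_jail(new_root: str) -> None:
        os.chroot(new_root)   # "/" now refers to new_root for this process
        os.chdir("/")         # move inside the new root
        # From here on, the process sees only files under new_root,
        # while still sharing the host's kernel.
        print("Visible top-level entries:", os.listdir("/"))

    if __name__ == "__main__":
        run_in_jail("/srv/guest-root")   # hypothetical guest filesystem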
Hardware virtualization disaster recovery

A disaster recovery (DR) plan is good business practice for a hardware virtualization platform solution. DR of a virtualization environment can ensure a high rate of availability during a wide range of situations that disrupt normal business operations. Continued operation of VMs is mission critical, and a DR plan can compensate for concerns about hardware performance and maintenance requirements. A hardware virtualization DR environment involves hardware and software protection solutions based on business continuity needs.[7][8]

Hardware virtualization DR methods include:

Tape backup for software data long-term archival needs. This common method can be used to store data offsite, but data recovery can be a difficult and lengthy process. Tape backup data is only as good as the latest copy stored, and the method requires a backup device and ongoing storage material.

Whole-file and application replication. This method requires control software and storage capacity for application and data file storage replication, typically on the same site. The data is replicated on a different disk partition or separate disk device, can be a scheduled activity for most servers, and is implemented more for database-type applications.

Hardware and software redundancy. This solution provides the highest level of disaster recovery protection for a hardware virtualization solution, providing duplicate hardware and software replication in two distinct geographic areas.[9]

Virtual firewall

A virtual firewall (VF) is a network firewall service or appliance running entirely within a virtualized environment, providing the usual packet filtering and monitoring that a physical network firewall provides. The VF can be realized as a traditional software firewall on a guest virtual machine already running, as a purpose-built virtual security appliance designed with virtual network security in mind, as a virtual switch with additional security capabilities, or as a managed kernel process running within the host hypervisor.


Background

Structural fire walls

Before the term "firewall" was applied to network technology, a "fire wall" was used in building design and mechanical engineering to designate a wall or partition composed of flame-resistant materials put in place to protect more flammable structural components from catching fire or spreading flames, either accidentally in the case of an unexpected fire, or where flame is usually present and needs to be isolated. Many building codes require fire-rated materials in human-inhabited areas. Fire walls protect vulnerable assets (such as spaces that might contain people) and prevent destructive events from spreading too quickly.

Network firewalls

Computer networks are much like any other kind of inhabited space: networks have assets, they have participants, and they have rules. Some computer networks are private and others public, some contain sensitive assets and others less so, and so on. Generally, users and traffic residing on one network need to be protected from users and traffic on a different but connected network. Yet all these networks are interconnected to a degree (this being the origin of the term "internetwork"[1]) – even if only via sneakernet – and this interconnectedness is both a critical feature and an emergent vulnerability. So it is not too surprising that the idea of "fire wall" functionality as applied to computer networks has been around since the early days of the Internet, and saw serious development in the 1990s.[2] It was eventually recognized that the very same problems applied in networks as existed in construction: containment and isolation. Network engineers began working with routers and packet filters as early containment technology, and from these efforts eventually emerged the sophisticated purpose-built network firewalls common in computer clouds, data centers, and home computers. The intent of fire walls and firewalls is much the same: minimize or eliminate damage to important assets by isolating or blocking destructive influences.

Virtual firewall

The problem

So long as a computer network runs entirely over physical hardware and cabling, it is a physical network. As such it can be protected by physical firewalls and fire walls alike; the first and most important protection for a physical computer network always was, and remains, a physical, locked, flame-resistant door.[3][4] Since the inception of the Internet this was the case, and structural fire walls and network firewalls were for a long time both necessary and sufficient.

Since about 1998 there has been an explosive increase in the use of virtual machines (VMs) in addition to – sometimes instead of – physical machines to offer many kinds of computer and communications services on local area networks and over the broader Internet. The advantages of virtual machines are well explored elsewhere.[5][6] Virtual machines can operate in isolation (for example, as a guest operating system on a personal computer) or under a unified virtualized environment overseen by a supervisory virtual machine monitor or "hypervisor" process. In the case where many virtual machines operate under the same virtualized environment, they might be connected together via a virtual network consisting of virtualized network switches between machines and virtualized network interfaces within machines.
The resulting virtual network could then implement traditional network protocols (for example TCP) or virtual network provisioning such as VLAN or VPN, though the latter, while useful for their own reasons, are in no way required.

There is a continued perception that virtual machines are inherently secure because they are seen as "sandboxed" within the host operating system,[7][8][9] that the host is in like manner secured against exploitation from the virtual machine itself,[9] and that the host is no threat to the virtual machine because it is a physical asset protected by traditional physical and network security.[8] Even when this is not explicitly assumed, early testing of virtual infrastructures often proceeds in isolated lab environments where security is not, as a rule, an immediate concern, and security may only come to the fore when the same solution moves into production or onto a computer cloud, where suddenly virtual machines of different trust levels may wind up on the same virtual network running across any number of physical hosts.

Because they are true networks, virtual networks may end up suffering the same kinds of vulnerabilities long associated with a physical network, some of which are:

• Users on machines within the virtual network have access to all other machines on the same virtual network.
• Compromising or misappropriating one virtual machine on a virtual network is sufficient to provide a platform for additional attacks against other machines on the same network segment.
• If a virtual network is internetworked to the physical network or broader Internet, then machines on the virtual network might have access to external resources (and external exploits) that could leave them open to exploitation.
• Network traffic that passes directly between machines without passing through security devices is unmonitored.

The problems created by the near invisibility of between-virtual machine (VM-to-VM) traffic on a virtual network are exactly like those found in physical networks, complicated by the fact that the packets may be moving entirely inside the hardware of a single physical host:

• Because the virtual network traffic may never leave the physical host hardware, security administrators cannot observe VM-to-VM traffic, cannot intercept it, and so cannot know what that traffic is for.
• Logging of VM-to-VM network activity within a single host, and verification of virtual machine access for regulatory compliance purposes, becomes difficult.
• Inappropriate uses of virtual network resources and VM-to-VM bandwidth consumption are difficult to discover or rectify.
• Unusual or inappropriate services running on or within the virtual network could go undetected.

And there are security issues known only in virtualized environments that wreak havoc with physical security measures and practices, and some of these are touted as actual advantages of virtual machine technology over physical machines:[10]

• VMs can be deliberately (or unexpectedly) migrated between trusted and untrusted virtualized environments where migration is enabled.
• VMs and/or virtual storage volumes can be easily cloned, and the clone made to run on any part of the virtualized environment, including a DMZ.
• Many companies use their purchasing or IT departments as the IT security lead agency, applying security measures at the time a physical machine is taken from the box and initialized. Since virtual machines can be created in a few minutes by any authorized user and set running without a paper trail, they can in these cases bypass established "first boot" IT security practices.
• VMs have no physical reality, leaving no trace of their creation or (in larger virtualized installations) of their continued existence. They can be destroyed just as easily, leaving nearly no digital signature and no physical evidence whatsoever.

In addition to the network traffic visibility issues and uncoordinated VM sprawl, a rogue VM using just the virtual network, switches, and interfaces (all of which run in a process on the host physical hardware) can potentially break the network as could any physical machine on a physical network, and in the usual ways. Moreover, by consuming host CPU cycles it can bring down the entire virtualized environment and all the other VMs with it, simply by overpowering the host physical resources the rest of the virtualized environment depends upon. This was likely to become a problem, but it was perceived within the industry as a well understood problem and one potentially open to traditional measures and responses.[11][12][13][14]

The solution

One method to secure, log, and monitor VM-to-VM traffic involved routing the virtualized network traffic out of the virtual network and onto the physical network via VLANs, and hence into a physical firewall already present to provide security and compliance services for the physical network. The VLAN traffic could be monitored and filtered by the physical firewall and then passed back into the virtual network (if deemed legitimate for that purpose) and on to the target virtual machine. Not surprisingly, LAN managers, security experts, and network security vendors began to wonder if it might be more efficient to keep the traffic entirely within the virtualized environment and secure it from there.[15][16][17][18] Enter the virtual firewall.

A virtual firewall (VF), then, is a firewall service or appliance running entirely within a virtualized environment – even as another virtual machine, but just as readily within the hypervisor itself – providing the usual packet filtering and monitoring that a physical firewall provides. The VF can be installed as a traditional software firewall on a guest VM already running within the virtualized environment; or it can be a purpose-built virtual security appliance designed with virtual network security in mind; or it can be a virtual switch with additional security capabilities; or it can be a managed kernel process running within the host hypervisor that sits atop all VM activity and so has deep access to the virtual network and its traffic, and can even access VM active memory and virtualized storage. The current direction in virtual firewall technology is a combination of security-capable virtual switches[19] and virtual security appliances integrating at the kernel level.[20][21][22]

Operation

Virtual firewalls can operate in different modes to provide security services, depending on the point of deployment. Typically these are either bridge-mode or hypervisor-mode (hypervisor-based, hypervisor-resident). Both may come shrink-wrapped as a virtual security appliance, and may install a virtual machine for management purposes.

A virtual firewall operating in bridge-mode acts like its physical-world firewall analog: it sits in a strategic part of the network infrastructure – usually at an inter-network virtual switch or bridge – and intercepts network traffic destined for other network segments and needing to travel over the bridge. By examining the source origin, the destination, the type of packet, and even the payload, the VF can decide whether the packet is to be allowed passage, dropped, rejected, or forwarded or mirrored to some other device. Initial entrants into the virtual firewall field were largely bridge-mode, and many offerings retain this feature.
By contrast, a virtual firewall operating in hypervisor-mode is not actually part of the virtual network at all, and as such has no physical-world device analog. A hypervisor-mode virtual firewall resides in the virtual machine monitor or hypervisor, where it is well positioned to capture VM activity, including packet injections. The entire monitored VM and all its virtual hardware, software, services, memory, and storage can be examined, as can changes in these. Further, since a hypervisor-based virtual firewall is not part of the network proper and is not a virtual machine, its functionality cannot be monitored in turn or altered by users and software limited to running under a VM or having access only to the virtualized network.

Bridge-mode virtual firewalls can be installed just like any other virtual machine in the virtualized infrastructure. Since the firewall is then a virtual machine itself, its relationship to all the other VMs may become complicated over time, due to VMs disappearing and appearing in random ways, migrating between different physical hosts, or other uncoordinated changes allowed by the virtualized infrastructure.

Hypervisor-mode virtual firewalls require a modification to the physical host hypervisor kernel in order to install process hooks or modules allowing the virtual firewall system access to VM information and direct access to the virtual network switches and virtualized network interfaces moving packet traffic between VMs, or between VMs and the network gateway. The hypervisor-resident virtual firewall can use the same hooks to perform all the customary firewall functions, such as packet inspection, dropping, and forwarding, but without actually touching the virtual network at any point. Hypervisor-mode virtual firewalls can be very much faster than the same technology running in bridge-mode, because they are not doing packet inspection in a virtual machine, but rather from within the kernel at native hardware speeds.

Stateful firewall

Description

A stateful firewall is able to hold significant attributes of each connection in memory, from start to finish. These attributes, which are collectively known as the state of the connection, may include such details as the IP addresses and ports involved in the connection and the sequence numbers of the packets traversing the connection. The most CPU-intensive checking is performed at the time of connection setup. All packets after that (for that session) are processed rapidly, because it is simple and fast to determine whether a packet belongs to an existing, pre-screened session. Once the session has ended, its entry in the state table is discarded.

The stateful firewall depends on the three-way handshake of the TCP protocol when the protocol being used is TCP; when the protocol is UDP, the stateful firewall does not depend on anything related to TCP. When a client initiates a new connection, it sends a packet with the SYN bit set in the packet header. All packets with the SYN bit set are considered by the firewall as NEW connections. If the service which the client has requested is available on the server, the service will reply to the SYN packet with a packet in which both the SYN and the ACK bits are set. The client will then respond with a packet in which only the ACK bit is set, and the connection will enter the ESTABLISHED state.
Such a firewall will pass all outgoing packets through, but will only allow incoming packets if they are part of an ESTABLISHED connection, ensuring that hackers cannot start unsolicited connections with the protected machine. A minimal sketch of this connection-tracking logic appears below.

In order to prevent the state table from filling up, sessions will time out if no traffic has passed for a certain period. These stale connections are removed from the state table. Many applications therefore send keepalive messages periodically in order to stop a firewall from dropping the connection during periods of no user activity, though some firewalls can be instructed to send these messages for applications.

Many stateful firewalls are able to track the state of flows in connectionless protocols; UDP hole punching is the technique associated with UDP. Such sessions usually get the ESTABLISHED state immediately after the first packet is seen by the firewall, and sessions in connectionless protocols can only end by time-out.

By keeping track of the connection state, stateful firewalls provide added efficiency in terms of packet inspection, because for existing connections the firewall need only check the state table, instead of checking the packet against the firewall's rule set, which can be extensive. Deep packet inspection is a separate concept: a stateful firewall checks incoming traffic against its state table first rather than jumping to the rule set, and if the state table matches, no deep packet inspection is needed. Stateful packet inspection is typically achieved by using ASIC-accelerated appliances that are specifically engineered to handle Application Layer transactions.[citation needed]
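Here is a minimal Python sketch of the handshake tracking just described, assuming packets are reduced to a 4-tuple plus SYN/ACK flags; the table layout and state names are illustrative only, and a real stateful firewall also checks sequence numbers, tracks teardown, and expires entries.

    # A minimal sketch of stateful TCP connection tracking, following the
    # three-way handshake described above.
    conn_table = {}   # 4-tuple -> state ("NEW", "SYN_ACKED", "ESTABLISHED")

    def key(src, sport, dst, dport):
        return (src, sport, dst, dport)

    def track(src, sport, dst, dport, syn=False, ack=False):
        """Update the state table for one observed TCP segment."""
        k = key(src, sport, dst, dport)
        reverse = key(dst, dport, src, sport)
        if syn and not ack:
            conn_table[k] = "NEW"               # client SYN opens an entry
        elif syn and ack and conn_table.get(reverse) == "NEW":
            conn_table[reverse] = "SYN_ACKED"   # server's SYN-ACK
        elif ack and conn_table.get(k) == "SYN_ACKED":
            conn_table[k] = "ESTABLISHED"       # client's final ACK
        return conn_table.get(k) or conn_table.get(reverse)

    def allow_incoming(src, sport, dst, dport):
        """Admit an inbound segment only for a known ESTABLISHED connection."""
        return conn_table.get(key(dst, dport, src, sport)) == "ESTABLISHED"

    # Handshake: SYN, SYN-ACK, ACK
    track("10.0.0.2", 51000, "93.184.216.34", 80, syn=True)
    track("93.184.216.34", 80, "10.0.0.2", 51000, syn=True, ack=True)
    track("10.0.0.2", 51000, "93.184.216.34", 80, ack=True)
    print(allow_incoming("93.184.216.34", 80, "10.0.0.2", 51000))  # True
    print(allow_incoming("203.0.113.9", 80, "10.0.0.2", 51000))    # False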


In computing, a stateful firewall (any firewall that performs stateful packet inspection (SPI) or stateful inspection) is a firewall that keeps track of the state of network connections (such as TCP streams or UDP communication) traveling across it. The firewall is programmed to distinguish legitimate packets for different types of connections. Only packets matching a known active connection will be allowed by the firewall; others will be rejected.

Sandbox (computer security)


In computer security, a sandbox is a security mechanism for separating running programs. It is often used to execute untested code, or untrusted programs from unverified third parties, suppliers, untrusted users, and untrusted websites.[1] The sandbox typically provides a tightly controlled set of resources for guest programs to run in, such as scratch space on disk and memory. Network access, the ability to inspect the host system, and the ability to read from input devices are usually disallowed or heavily restricted. In this sense, sandboxes are a specific example of virtualization.

Examples

Some examples of sandboxes are:

• Applets are self-contained programs that run in a virtual machine or scripting language interpreter that does the sandboxing. In application streaming schemes, the applet is downloaded onto a remote client and may begin executing before it arrives in its entirety. Applets are common in web browsers, which use the mechanism to safely execute untrusted code embedded in web pages. Three common applet implementations – Adobe Flash, Java applets, and Silverlight – provide (at minimum) a rectangular window with which to interact with the user and some persistent storage (at the user's permission).
• A jail is a set of resource limits imposed on programs by the operating system kernel. It can include I/O bandwidth caps, disk quotas, network access restrictions, and a restricted filesystem namespace. Jails are most commonly used in virtual hosting.


• Rule-based execution gives users full control over what processes are started or spawned (by other applications), which processes are allowed to inject code into other applications, and which have access to the network. It can also control file/registry security (what programs can read and write to the file system or registry). As such, viruses and trojans are less likely to infect a PC. The SELinux and AppArmor security frameworks are two such implementations for Linux.
• Virtual machines emulate a complete host computer, on which a conventional operating system may boot and run as on actual hardware. The guest operating system is sandboxed in the sense that it does not run natively on the host and can only access host resources through the emulator.
• Sandboxing on native hosts: security researchers rely heavily on sandboxing technologies to analyse malware behaviour. By creating an environment that mimics or replicates the targeted desktops, researchers can evaluate how malware infects and compromises a target host.
• Capability systems can be thought of as a fine-grained sandboxing mechanism, in which programs are given opaque tokens when spawned and have the ability to do specific things based on what tokens they hold. Capability-based implementations can work at various levels, from kernel to user space. An example of capability-based user-level sandboxing is HTML rendering in Google Chrome.
• Online judge systems test programs in programming contests.
• New-generation pastebins allow users to execute pasted code snippets.
• Linux's Secure Computing Mode (seccomp) is a sandbox built into the Linux kernel. When activated in strict mode, seccomp only allows the write(), read(), exit(), and sigreturn() system calls; a minimal sketch of enabling it appears after this list.
• HTML5 has a "sandbox" attribute for use with iframes.[2]
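As a concrete illustration of strict-mode seccomp, here is a minimal Python sketch that enables it through the raw prctl(2) interface via ctypes; this is Linux-only, and since the Python interpreter itself makes other system calls, the process will be killed with SIGKILL soon after (for example at interpreter shutdown), so treat it purely as a demonstration.

    # A minimal sketch of Linux's seccomp strict mode via prctl(2), using
    # ctypes (Linux-only; constants from <linux/prctl.h>/<linux/seccomp.h>).
    # After the call, only read(), write(), exit(), and sigreturn() are
    # permitted; any other system call kills the process with SIGKILL --
    # including ones the Python interpreter itself makes, so a realistic
    # sandbox would use seccomp from C, or a filter (mode 2), instead.
    import ctypes
    import os

    PR_SET_SECCOMP = 22      # from <linux/prctl.h>
    SECCOMP_MODE_STRICT = 1  # from <linux/seccomp.h>

    libc = ctypes.CDLL(None, use_errno=True)
    msg = b"now sandboxed: only read/write/exit/sigreturn allowed\n"

    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_SECCOMP) failed")

    os.write(1, msg)   # write() on an already-open descriptor still works
    os.write(2, b"opening a file now would kill the process\n")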

What Firewall Software Does

A firewall is simply a program or hardware device that filters the information coming through the Internet connection into your private network or computer system. If an incoming packet of information is flagged by the filters, it is not allowed through.

If you have read the article How Web Servers Work, then you know a good bit about how data moves on the Internet, and you can easily see how a firewall helps protect computers inside a large company. Say you work at a company with 500 employees. The company will therefore have hundreds of computers that all have network cards connecting them together. In addition, the company will have one or more connections to the Internet through something like T1 or T3 lines. Without a firewall in place, all of those hundreds of computers are directly accessible to anyone on the Internet. A person who knows what he or she is doing can probe those computers, try to make FTP connections to them, try to make telnet connections to them, and so on. If one employee makes a mistake and leaves a security hole, hackers can get to the machine and exploit the hole.

With a firewall in place, the landscape is much different. A company will place a firewall at every connection to the Internet (for example, at every T1 line coming into the company). The firewall can implement security rules. For example, one of the security rules inside the company might be: out of the 500 computers inside this company, only one is permitted to receive public FTP traffic; allow FTP connections only to that one computer and prevent them on all others. A company can set up rules like this for FTP servers, web servers, telnet servers, and so on. In addition, the company can control how employees connect to websites, whether files are allowed to leave the company over the network, and so on. A firewall gives a company tremendous control over how people use the network.

Firewalls use one or more of three methods to control traffic flowing in and out of the network:

• Packet filtering – Packets (small chunks of data) are analyzed against a set of filters. Packets that make it through the filters are sent to the requesting system and all others are discarded.
• Proxy service – Information from the Internet is retrieved by the firewall and then sent to the requesting system, and vice versa (a minimal sketch of this relay idea appears after this comment).
• Stateful inspection – A newer method that doesn't examine the contents of each packet but instead compares certain key parts of the packet to a database of trusted information. Information traveling from inside the firewall to the outside is monitored for specific defining characteristics, then incoming information is compared to these characteristics. If the comparison yields a reasonable match, the information is allowed through. Otherwise it is discarded.

— Preceding unsigned comment added by Thesandeepsinghdon (talk • contribs) 17:20, 6 October 2016 (UTC)
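To illustrate the proxy-service method above, here is a minimal Python sketch of a TCP relay: the proxy accepts the client's connection and opens its own connection to the real server, copying bytes in both directions. The listen and upstream addresses are hypothetical placeholders, and a production proxy would add filtering, logging, and timeouts.

    # A minimal sketch of the proxy-service idea: the firewall host accepts
    # the client's connection and makes its own connection to the real
    # server, relaying bytes in both directions. Host/port values here are
    # hypothetical placeholders.
    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8080)      # where clients connect
    UPSTREAM_ADDR = ("192.0.2.10", 80)   # the protected/real server

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes from src to dst until src closes."""
        while data := src.recv(4096):
            dst.sendall(data)
        dst.close()

    def serve() -> None:
        with socket.create_server(LISTEN_ADDR) as listener:
            while True:
                client, _ = listener.accept()
                upstream = socket.create_connection(UPSTREAM_ADDR)
                # One thread per direction; this is where a real proxy
                # would inspect or rewrite traffic before relaying it.
                threading.Thread(target=pipe, args=(client, upstream),
                                 daemon=True).start()
                threading.Thread(target=pipe, args=(upstream, client),
                                 daemon=True).start()

    if __name__ == "__main__":
        serve()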