Information security is the practice of protecting sensitive information from parties not meant to access it, and it is a fundamental consideration in the design and implementation of network frameworks. Khnaser and Hunter (2004) postulate that network security should be guided by the CIA triad, which denotes confidentiality, integrity and availability. Confidentiality implies that private information, such as that of an individual or an organization, should not be accessible to outsiders without authority; there should therefore be a mechanism controlling how data is accessed, without unduly complicating access for authorized persons. Integrity means that a network should preserve the consistency and accuracy of its data against unauthorized or fraudulent alteration. Availability means that authorized users can reach the information and services they need whenever they need them. A network framework that does not take care to shield its internal data from predation is vulnerable to security threats. These can emanate from internal or external users, with internal threats recognized as more prevalent and damaging than external ones. For the purposes of this analysis, an attack on a network refers to unauthorized or fraudulent access to information stored or available in a network, and a secure network framework is one that reduces the prevalence and ease of such attacks, or 'breaking into' a network. A secure network framework combines a number of concepts which should be taken into account when building or connecting to a network. These concepts are attack prevention, attack detection, attack isolation and attack recovery, and they are discussed below.
Attack prevention refers to the mechanisms put in place in a network framework to minimize or prevent intrusions, and a variety of such mechanisms are currently in use. Most network attacks originate from applications and systems exposed on the web. Fundamentally, malware attacks exploit vulnerabilities arising from design failures, especially a lack of input validation; worm attacks, DoS attacks and port scans fall into this category. One of the most common prevention tools is anti-virus software, which uses signature codes designed to spot attacks and stop them by blocking or deleting the offending code. Anti-virus tools are widespread, although they do not provide foolproof prevention, since they act on an intrusion only once it is already in a network. More recent work has focused on intrusion prevention systems that restrict the entry of malware into a network in the first place. Such developments have come in forms such as Cisco's intrusion prevention, an improvement on the conventional firewall designed not only to protect vulnerable computers but also to thwart unforeseen attacks (Cisco Systems, 2001).
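To make the signature idea concrete, the following sketch shows how a scanner might match known byte patterns against a payload. The signature table, the pattern bytes and the function name are purely illustrative assumptions, not the workings of any real anti-virus product:

```python
# Minimal sketch of signature-based scanning.
# The byte patterns below are invented examples, not real malware signatures.
KNOWN_SIGNATURES = {
    b"X5O!P%@AP": "EICAR-like test pattern",
    b"\xde\xad\xbe\xef": "example-worm-A",
}

def scan_payload(payload: bytes) -> list:
    """Return the names of all known signatures found in the payload."""
    return [name for sig, name in KNOWN_SIGNATURES.items() if sig in payload]

hits = scan_payload(b"GET /index.html X5O!P%@AP HTTP/1.1")
# 'hits' lists which signatures matched; an empty list means the payload is
# clean as far as this (tiny, illustrative) signature database is concerned.
```

As the essay notes, such matching only recognizes patterns already in the database, which is why fresh signatures must be downloaded continually.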
In incorporating attack prevention, a network should be designed with the following abilities. First, it should be able to identify security threats of all forms. Conventional anti-virus tools rely on signatures to detect attacks, so a network system should be able to download new signatures to combat new foreign programs. Although successful at times, new signatures alone do not constitute the most desirable prevention tool; the desirable mechanism is one capable of both identifying an attack and building a decoding prevention system that renders it harmless. Another attack prevention option is the application of validity checks on the operands used in security-sensitive operations. This approach targets script injection and SQL injection attacks, which are the most prevalent forms of web attack. Ideally, therefore, attack prevention should be incorporated into the design of a network framework, especially in its supporting software. There have also been developments in attack prevention systems in which software is designed to work with hardware network devices to prevent the entry of foreign intrusions. The approach here is to design a software and hardware architecture that enhances attack identification, detection and isolation so that an attack can be deactivated appropriately. One such system is HACQIT, analyzed by Reynolds et al. (2002).
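The validity-check point above can be illustrated with a parameterized SQL query, a standard defence against SQL injection. This is a minimal sketch using Python's built-in sqlite3 module; the table, the sample data and the function name are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # Parameterized query: the driver treats 'name' strictly as data,
    # so input such as "' OR '1'='1" cannot alter the SQL statement.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

find_user(conn, "alice")          # returns the matching row
find_user(conn, "' OR '1'='1")    # returns no rows; the injection attempt is inert
```

Had the query been built by string concatenation instead, the second call would have matched every row, which is exactly the class of design failure the essay attributes to missing input validation.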
As mentioned earlier, most conventional security systems, such as firewalls and anti-virus software, are designed to operate either at end nodes or at network vantage points. As such, they cannot counter a wide range of possible attacks: a bandwidth attack, for example, may be blocked at the end nodes yet still consume enough internal network bandwidth to render the whole network unusable. There are also attacks, such as most DDoS (Distributed Denial of Service) attacks, which routinely penetrate firewalls and therefore leave the network vulnerable. Conventional attack prevention methods thus do not give a network framework complete protection against such attacks. Based on these weaknesses, vendors and researchers of network security have suggested the development of attack detection systems, or perimeter defences, designed to operate at the entrance of a network or subnet; when such a measure is taken, the concept is referred to as intrusion detection. According to Kompella, Singh and Varghese (2004), there are two conventional approaches to intrusion detection: signature detection and anomaly detection. Signature detection is applicable in detecting a specific type of attack, for instance important and known attacks by worms and viruses. It is weak, however, at detecting other types of attack, such as DDoS and scan attacks, which have no characteristic signature in a single packet but are instead characterized by malfunctions or unusual behaviour across a spectrum of packets. Anomaly detection, on the other hand, operates by first identifying normal behaviour in a network, typically using change-point detection or wavelets, and then flagging any behaviour that deviates characteristically from this 'normal network operation'.
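A minimal sketch of the anomaly-detection idea follows, assuming a simple mean and standard-deviation baseline rather than the change-point or wavelet methods the text mentions; the traffic figures are invented for illustration:

```python
import statistics

def detect_anomalies(baseline, observed, k=3.0):
    """Flag observed samples more than k standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) > k * stdev]

# Baseline: packets per second measured during normal operation (illustrative).
baseline = [100, 98, 103, 101, 99, 102, 100, 97]
# A sudden surge to 500 pkt/s deviates far from the learned 'normal'
# behaviour and is flagged, while the other samples pass unremarked.
flagged = detect_anomalies(baseline, [101, 99, 500, 100])
```

Note how this matches the essay's description: nothing in the flagged packet itself is a known signature; it is the deviation from established behaviour that raises the alarm.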
Attack detection is an improvement on attack prevention since it aims to identify malware before it enters the network, whereas attack prevention aims at stopping the execution of abnormal behaviour that has already penetrated it. In designing a network framework, it is therefore crucial to consider placing such attack detection systems at the network entrance to block programs that exhibit abnormal behaviour.
The proliferation of malicious programs, especially on the World Wide Web, makes it increasingly hard to come up with solutions that can continually restrict the entry of malware into a network. Despite huge investments in seeking effective systems for prevention and detection, the prevalence of worms and viruses on networked computers remains high. For organizations, the continued proliferation of such programs poses a constant danger, and they may therefore at times be forced to isolate their local networks from other networks. Most attackers use the World Wide Web to attack vulnerable computers, which implies that if an organization can work on an easy-to-use local network without connecting to the web, its systems will be isolated from web-borne attacks. Isolation, therefore, is the separation of systems from an existing network to deny malware an avenue of penetration (Yegulalp, 2006). A number of isolation techniques are in common use and may be effective in different scenarios, including firewalling, virtual network segmentation or sub-netting, IPSec, and clean-room isolation. In firewalling, all versions of Windows contain an inbuilt firewall which can be set to restrict unwanted access. In sub-netting, an individual computer is connected only to a local network containing the sources of needed information, so that other users, such as those on the web, cannot reach it. IPSec is an isolation mechanism in which a server exchanges encrypted packets only with trusted clients, as regulated by policies set on the server (Microsoft Technet, 2009); the server can thus control which clients it communicates with, reducing the possibility of access by unknown sources. Clean-room isolation is one in which a computer is disconnected from the network entirely.
Such a measure may be necessary in cases where crucial information needs to be protected from any possible network-borne threat.
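The sub-netting form of isolation described above can be sketched with Python's standard ipaddress module; the address range and the function name are illustrative assumptions, not a prescription for any particular network:

```python
import ipaddress

# Illustrative isolated subnet for the local network; the range is an assumption.
LOCAL_SUBNET = ipaddress.ip_network("192.168.10.0/24")

def is_local(client_ip):
    """Return True only for clients inside the isolated subnet."""
    return ipaddress.ip_address(client_ip) in LOCAL_SUBNET

is_local("192.168.10.42")   # a host on the isolated local network: allowed
is_local("203.0.113.7")     # an outside address, e.g. from the web: denied
```

In practice this kind of membership test would sit behind a firewall rule or access-control policy; the point of the sketch is simply that a subnet boundary gives a crisp, checkable definition of 'inside' versus 'outside'.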