information system, an integrated set of components for collecting, storing, and processing data and for providing information, knowledge, and digital products. Business firms and other organizations rely on information systems to carry out and manage their operations, interact with their customers and suppliers, and compete in the marketplace. Information systems are used to run interorganizational supply chains and electronic markets. For instance, corporations use information systems to process financial accounts, to manage their human resources, and to reach their potential customers with online promotions. Many major companies are built entirely around information systems. These include eBay, largely an auction marketplace; Amazon, an expanding electronic mall and provider of cloud computing services; Alibaba, a business-to-business e-marketplace; and Google, a search engine company that derives most of its revenue from keyword advertising on Internet searches.

Governments deploy information systems to provide services cost-effectively to citizens. Digital goods—such as electronic books, video products, and software—and online services, such as gaming and social networking, are delivered with information systems. Individuals rely on information systems, generally Internet-based, for conducting much of their personal lives: for socializing, study, shopping, banking, and entertainment.

As major new technologies for recording and processing information were invented over the millennia, new capabilities appeared, and people became empowered. The invention of the printing press by Johannes Gutenberg in the mid-15th century and the invention of a mechanical calculator by Blaise Pascal in the 17th century are but two examples. These inventions led to a profound revolution in the ability to record, process, disseminate, and retrieve information and knowledge. This led, in turn, to even deeper changes in individual lives, business organization, and human governance.
The first large-scale mechanical information system was Herman Hollerith’s census tabulator. Invented in time to process the 1890 U.S. census, Hollerith’s machine represented a major step in automation, as well as an inspiration to develop computerized information systems. One of the first computers used for such information processing was the UNIVAC I, installed at the U.S. Bureau of the Census in 1951 for administrative use and at General Electric in 1954 for commercial use. Beginning in the late 1970s, personal computers brought some of the advantages of information systems to small businesses and to individuals. Early in the same decade the Internet began its expansion as the global network of networks. In 1991 the World Wide Web, invented by Tim Berners-Lee as a means to access the interlinked information stored in the globally dispersed computers connected by the Internet, began operation and became the principal service delivered on the network. The global penetration of the Internet and the Web has enabled access to information and other resources and facilitated the forming of relationships among people and organizations on an unprecedented scale. The progress of electronic commerce over the Internet has resulted in a dramatic growth in digital interpersonal communications (via e-mail and social networks), distribution of products (software, music, e-books, and movies), and business transactions (buying, selling, and advertising on the Web). With the worldwide spread of smartphones, tablets, laptops, and other computer-based mobile devices, all of which are connected by wireless communication networks, information systems have been extended to support mobility as the natural human condition. As information systems enabled more diverse human activities, they exerted a profound influence over society. 
These systems quickened the pace of daily activities, enabled people to develop and maintain new and often more-rewarding relationships, affected the structure and mix of organizations, changed the type of products bought, and influenced the nature of work. Information and knowledge became vital economic resources. Yet, along with new opportunities, the dependence on information systems brought new threats. Intensive industry innovation and academic research continually develop new opportunities while aiming to contain the threats. The main components of information systems are computer hardware and software, telecommunications, databases and data warehouses, human resources, and procedures. The hardware, software, and telecommunications constitute information technology (IT), which is now ingrained in the operations and management of organizations.
Today throughout the world even the smallest firms, as well as many households, own or lease computers. Individuals may own multiple computers in the form of smartphones, tablets, and other wearable devices. Large organizations typically employ distributed computer systems, from powerful parallel-processing servers located in data centres to widely dispersed personal computers and mobile devices, integrated into the organizational information systems. Sensors are becoming ever more widely distributed throughout the physical and biological environment to gather data and, in many cases, to effect control via devices known as actuators. Together with the peripheral equipment—such as magnetic or solid-state storage disks, input-output devices, and telecommunications gear—these constitute the hardware of information systems. The cost of hardware has steadily and rapidly decreased, while processing speed and storage capacity have increased vastly. This development has been occurring under Moore’s law: the power of the microprocessors at the heart of computing devices has been doubling approximately every 18 to 24 months. However, hardware’s use of electric power and its environmental impact are concerns being addressed by designers. Increasingly, computer and storage services are delivered from the cloud—from shared facilities accessed over telecommunications networks.
A computer network is a system that connects two or more computing devices for transmitting and sharing information. This article explains computer networks in detail, along with their types, components, and best practices for 2022.

Computing devices include everything from a mobile phone to a server. These devices are connected using physical wires such as fiber optics, but they can also be wireless. The first working network, called ARPANET, was created in the late 1960s and was funded by the U.S. Department of Defense. Government researchers used it to share information at a time when computers were large and difficult to move. We have come a long way today from that basic kind of network. Today's world revolves around the internet, which is a network of networks that connects billions of devices across the world. Organizations of all sizes use networks to connect their employees' devices and shared resources such as printers.

An example of a computer network at large is the traffic monitoring systems in urban cities. These systems alert officials and emergency responders with information about traffic flow and incidents. A simpler example is using collaboration software such as Google Drive to share documents with colleagues who work remotely. Every time we connect via a video call, stream movies, share files, chat with instant messages, or just access something on the internet, a computer network is at work.

Computer networking is the branch of computer science that deals with the ideation, architecture, creation, maintenance, and security of computer networks. It is a combination of computer science, computer engineering, and telecommunications.
Key Components of a Computer Network

From a broader lens, a computer network is built with two basic blocks: nodes, or network devices, and links. The links connect two or more nodes with each other. The way these links carry the information is defined by communication protocols. The communication endpoints, i.e., the origin and destination devices, are often called ports.

Main Components of a Computer Network

1. Network Devices

Network devices, or nodes, are the computing devices that need to be linked in the network. Common network devices include routers, switches, hubs, bridges, modems, gateways, and endpoint devices such as computers, servers, and printers.
2. Links

Links are the transmission media, which can be of two types: wired media, such as twisted-pair cable, coaxial cable, and optical fiber, and wireless media, such as radio, microwave, and satellite links.
3. Communication protocols

A communication protocol is a set of rules followed by all nodes involved in the information transfer. Some common protocols include the internet protocol suite (TCP/IP), IEEE 802, Ethernet, wireless LAN, and cellular standards. TCP/IP is a conceptual model that standardizes communication in a modern network. It suggests four functional layers for these communication links: the network access (link) layer, the internet layer, the transport layer, and the application layer.
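As a minimal sketch of these layers in action (not part of the original article), Python's standard socket module can carry a message over a TCP connection on the loopback interface: TCP provides the transport layer, and the echoed bytes stand in for application-layer data.

```python
# Minimal sketch: a TCP echo exchange over loopback, illustrating the
# transport layer (TCP) carrying application-layer data. The OS picks a
# free ephemeral port (requested with port 0).
import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)   # read one application-layer message
        conn.sendall(data)       # echo it back unchanged

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("127.0.0.1", 0))   # loopback; OS assigns a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(b"hello, network")
        reply = cli.recv(1024)

print(reply)  # b'hello, network'
```

The same pattern, with real host names instead of loopback, underlies most client-server traffic on TCP/IP networks.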
Most of the modern internet structure is based on the TCP/IP model, though there are still strong influences of the similar but seven-layered open systems interconnection (OSI) model. IEEE 802 is a family of IEEE standards that deals with local area networks (LAN) and metropolitan area networks (MAN). Wireless LAN is the most well-known member of the IEEE 802 family and is more widely known as WLAN or Wi-Fi.

4. Network Defense

While nodes, links, and protocols form the foundation of a network, a modern network cannot exist without its defenses. Security is critical when unprecedented amounts of data are generated, moved, and processed across networks. A few examples of network defense tools include firewalls, intrusion detection systems (IDS), intrusion prevention systems (IPS), network access control (NAC), content filters, proxy servers, anti-DDoS devices, and load balancers.

Types of Computer Networks

Computer networks can be classified based on several criteria, such as the transmission medium, the network size, the topology, and organizational intent. Based on geographical scale, the main types of networks are personal area networks (PAN), local area networks (LAN), metropolitan area networks (MAN), and wide area networks (WAN).
Based on organizational intent, networks can be classified as private networks such as intranets, extranets that extend an intranet to trusted outside parties, and public networks such as the internet.
Key Objectives of Creating and Deploying a Computer Network

There is no industry—education, retail, finance, tech, government, or healthcare—that can survive without well-designed computer networks. The bigger an organization, the more complex the network becomes. Before taking on the onerous task of creating and deploying a computer network, here are some key objectives that must be considered.

Objectives of Deploying a Computer Network

1. Resource sharing

Today's enterprises are spread across the globe, with critical assets being shared across departments, geographies, and time zones. Clients are no longer bound by location. A network allows data and hardware to be accessible to every pertinent user. This also helps with interdepartmental data processing. For example, the marketing team analyzes customer data and product development cycles to enable executive decisions at the top level.

2. Resource availability & reliability

A network ensures that resources are not stranded in inaccessible silos and are available from multiple points. Reliability is high because resources can usually be supplied from more than one source. Important resources must be backed up across multiple machines to remain accessible in case of incidents such as hardware outages.

3. Performance management

A company's workload only increases as it grows. When one or more processors are added to the network, it improves the system's overall performance and accommodates this growth. Saving data in well-architected databases can drastically improve lookup and fetch times.

4. Cost savings

Huge mainframe computers are an expensive investment, and it makes more sense to add processors at strategic points in the system. This not only improves performance but also saves money. Since networks enable employees to access information in seconds, they save operational time, and subsequently, costs.
Centralized network administration also means that fewer investments need to be made for IT support.

5. Increased storage capacity

Network-attached storage devices are a boon for employees who work with high volumes of data. For example, every member of the data science team does not need an individual data store for the huge number of records they crunch. Centralized repositories get the job done in an even more efficient way. With businesses seeing record levels of customer data flowing into their systems, the ability to increase storage capacity is necessary in today's world.

6. Streamlined collaboration & communication

Networks have a major impact on the day-to-day functioning of a company. Employees can share files, view each other's work, sync their calendars, and exchange ideas more effectively. Every modern enterprise runs on internal messaging systems such as Slack for the uninhibited flow of information and conversations. However, emails are still the formal mode of communication with clients, partners, and vendors.

7. Reduction of errors

Networks reduce errors by ensuring that all involved parties acquire information from a single source, even if they are viewing it from different locations. Backed-up data provides consistency and continuity. Standard versions of customer and employee manuals can be made available to a large number of people without much hassle.

8. Secured remote access

Computer networks promote flexibility, which is important in uncertain times like now, when natural disasters and pandemics are ravaging the world. A secure network ensures that users have a safe way of accessing and working on sensitive data, even when they're away from the company premises. Mobile handheld devices registered to the network even enable multiple layers of authentication to ensure that no bad actors can access the system.
Top 10 Best Practices for Computer Network Management in 2022

Network management is the process of configuring, monitoring, and troubleshooting everything that pertains to a network, be it hardware, software, or connections. The five functional areas of network management are fault management, configuration management, performance management, security management, and (user) accounting management. Computer networks can quickly become unruly mammoths if not designed and maintained from the beginning. Here are the top 10 practices for proper computer network management.

Network Management Best Practices

1. Pick the right topology

Network topology is the pattern or hierarchy in which nodes are connected to each other. The topology can speed up, slow down, or even break the network based on the company's infrastructure and requirements. Before setting up a network from scratch, network architects must choose the right one. Common topologies include bus, star, ring, mesh, and tree topologies.
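As an illustrative sketch (not from the original article), two common topologies, star and ring, can be modeled as simple adjacency lists to compare how many links each requires. The device names below are hypothetical:

```python
# Illustrative sketch: representing star and ring topologies as adjacency
# lists and comparing the number of links each needs for the same nodes.

def star_topology(nodes: list[str]) -> dict[str, list[str]]:
    """First node acts as the central hub; all others connect only to it."""
    hub, *leaves = nodes
    adj = {hub: list(leaves)}
    for leaf in leaves:
        adj[leaf] = [hub]
    return adj

def ring_topology(nodes: list[str]) -> dict[str, list[str]]:
    """Each node connects to its two neighbours around the ring."""
    n = len(nodes)
    return {nodes[i]: [nodes[(i - 1) % n], nodes[(i + 1) % n]]
            for i in range(n)}

def link_count(adj: dict[str, list[str]]) -> int:
    # Each link appears in two adjacency lists, so halve the total.
    return sum(len(peers) for peers in adj.values()) // 2

devices = ["switch", "pc1", "pc2", "pc3", "pc4"]
print(link_count(star_topology(devices)))  # 4: one link per leaf
print(link_count(ring_topology(devices)))  # 5: one link per node
```

The star concentrates all traffic (and all risk) at the hub, while the ring spreads links evenly but makes every path traverse intermediate nodes; this kind of trade-off is what topology selection weighs.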
2. Document & update constantly

Documentation of the network is vital since it is the backbone of operations. The documentation must include network diagrams, device inventories, IP address allocations, configuration details, and records of every change made to the network.
This must be audited at scheduled intervals or during rehauls. Not only does this make network management easier, but it also allows for smoother compliance audits.

3. Use the right tools

The network topology is just the first step toward building a robust network. To manage a highly available and reliable network, the appropriate tools must be placed at the right locations. Must-have tools in a network include network monitoring and performance management tools, configuration management tools, IP address managers, and log analyzers.
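As a toy example of the monitoring side of such tooling (a sketch, not a substitute for dedicated monitoring software), a script can check whether key TCP services are reachable within a timeout. The host names and ports below are hypothetical:

```python
# Toy monitoring sketch: probe whether a TCP service accepts connections
# within a timeout. ASSUMPTION: the hosts/ports listed are hypothetical
# examples, not real services.
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refusals, and DNS failures
        return False

services = [("intranet.example.com", 443), ("printer.example.com", 9100)]
for host, port in services:
    status = "up" if is_reachable(host, port) else "DOWN"
    print(f"{host}:{port} is {status}")
```

Real monitoring tools add scheduling, alerting, and historical dashboards on top of checks like this one.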
4. Establish baseline network & abnormal behavior

A baseline allows admins to know how the network normally behaves in terms of traffic, user accesses, etc. With an established baseline, alerts can be set up in appropriate places to flag anomalies immediately. The normal range of behavior must be documented at both user and organizational levels. Data required for the baseline can be acquired from routers, switches, firewalls, wireless APs, sniffers, and dedicated collectors.

5. Protect the network from insider threats

Firewalls and intrusion prevention systems ensure that bad actors remain out of the network. However, insider threats need to be addressed as well, particularly with cybercriminals targeting those with access to the network using various social engineering ploys. One way of doing this is to operate on a least-privilege model for access management and control. Another is to use stronger authentication mechanisms such as single sign-on (SSO) and two-factor authentication (2FA). Besides this, employees also need to undergo regular training to deal with security threats, and proper escalation processes must be documented and circulated widely.

6. Use multiple vendors for added security

While it makes sense to stick to one hardware vendor, a diverse range of network security tools is a major plus for a large network. Security is a dynamic and ever-evolving landscape. Hardware advancements are rapid, and cyber threats evolve with them. It is impossible for one vendor to be up to date on all threats. Additionally, different intrusion detection solutions use different detection algorithms. A good mix of these tools strengthens security; however, you must ensure that they are compatible and allow for common logging and interfacing.

7. Segregate the network

Enterprise networks can become large and clunky. Segregation allows them to be divided into logical or functional units, called zones.
Segregation is usually done using switches, routers, and virtual LAN solutions. One advantage of a segregated network is that it reduces potential damage from a cyberattack and keeps critical resources out of harm's way. Another plus is that it allows for more functional classification of networks, such as separating programmer needs from human resources needs.

8. Use centralized logging

Centralized logs are key to capturing an overall view of the network. Immediate log analysis can help the security team flag suspicious logins and help IT admin teams spot overwhelmed systems in the network.

9. Consider using honeypots & honeynets

Honeypots are separate systems that appear to have legitimate processes and data but are actually a decoy for insider and outsider threats. Any breach of this system does not cause the loss of any real data. A honeynet is a fake network segment that serves the same purpose. While these may come at an additional cost to the network, they allow the security team to keep an eye out for malicious players and make appropriate adjustments.

10. Automate wherever possible

New devices are added to systems regularly, and old ones are retired. Users and access controls keep changing frequently. All of these tasks should be automated to reduce human error and to ensure there are no vulnerable zombie systems in the network, costing money and security. Automation with respect to security is also crucial. It is a good practice to automate responses to attacks, including blocking IP addresses, terminating connections, and gathering additional information about attacks.

Takeaway

A successful network enhances productivity, security, and innovation with the least overhead costs. This comes only with robust design and implementation with a clear picture of the business needs.
While network creation may seem like a purely technical endeavor, it requires business input, especially in the beginning stages. Network management also involves evolving workflows and adapting as technologies grow and change.