
Monday, July 9, 2012

Network topology & transmission media

Chapter 9  Networks and Communications


Network topology

A network topology refers to the layout of the computers and devices in a communications network.


Transmission media


Transmission media consist of the materials or substances on which data, instructions, or information travel in a communications system. The amount of data, instructions, and information that can travel over a communications channel sometimes is called the bandwidth.
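As a quick illustration of bandwidth, the sketch below computes how long a file transfer takes over a channel of a given capacity. The file size and channel speed are made-up example numbers, and protocol overhead is ignored:

```python
def transfer_time_seconds(file_size_bytes, bandwidth_bits_per_second):
    """Time to move a file over a channel, ignoring protocol overhead."""
    return (file_size_bytes * 8) / bandwidth_bits_per_second

# Example: a 10 MB file over a 100 Mbps channel
size = 10 * 1_000_000          # 10 MB in bytes
bandwidth = 100 * 1_000_000    # 100 Mbps in bits per second
print(transfer_time_seconds(size, bandwidth))  # 0.8 (seconds)
```

Doubling the bandwidth halves the transfer time, which is why higher-bandwidth media matter for large transfers.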

Network topology
Bus Topology

Advantages
It is easy to handle and implement. 
It is best suited for small networks.


Disadvantages
The cable length is limited. This limits the number of stations that can be connected.
This network topology can perform well only for a limited number of nodes.


Ring Topology

Advantage
The data being transmitted between two nodes passes through all the intermediate nodes, so a central server is not required to manage this topology.

Disadvantages
The failure of a single node of the network can cause the entire network to fail.
Movement of or changes to network nodes affect the performance of the entire network.


Star Topology

Advantages
Due to its centralized nature, the topology offers simplicity of operation. 
It also isolates each device in the network.

Disadvantage
The network operation depends on the functioning of the central hub. Hence, the failure of the central hub leads to the failure of the entire network.
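The failure behavior described above can be sketched by modelling topologies as adjacency lists and checking whether every remaining station is still reachable after one node fails. This is a toy model, not from the chapter; the node names are made up, and the ring is modelled as one-directional to reflect data passing through every intermediate node:

```python
def all_reachable(adj, start, failed=None):
    """Breadth-first search from `start`, skipping the failed node."""
    if start == failed:
        return False
    targets = {n for n in adj if n != failed}
    seen = {start}
    queue = [start]
    while queue:
        current = queue.pop(0)
        for neighbor in adj[current]:
            if neighbor != failed and neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen == targets

# Star: traffic flows both ways between each station and the central hub.
star = {"hub": ["a", "b", "c"],
        "a": ["hub"], "b": ["hub"], "c": ["hub"]}
# Ring: traffic passes one way around the loop through every station.
ring = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["a"]}

print(all_reachable(star, "a", failed="b"))    # True: other stations keep working
print(all_reachable(star, "a", failed="hub"))  # False: hub failure stops everything
print(all_reachable(ring, "a", failed="c"))    # False: one failed node breaks the loop
```

The three checks mirror the advantages and disadvantages listed above: the star isolates station failures but depends entirely on the hub, while the ring fails whenever any single node does.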

Transmission media
Advantages

  • Used for long-distance communication
  • High-speed data transmission
  • Many receiver stations can receive signals from the same sender station


Disadvantage
  • Very costly

Mashup

Chapter 9  Networks and Communications


Mashup (web application hybrid)
In Web development, a mashup is a Web page or application that uses and combines data, presentation or functionality from two or more sources to create new services. The term implies easy, fast integration, frequently using open APIs and data sources to produce enriched results that were not necessarily the original reason for producing the raw source data.
The main characteristics of a mashup are combination, visualization, and aggregation. A mashup makes existing data more useful, for both personal and professional use. To be able to permanently access the data of other services, mashups are generally client applications or hosted online.
In recent years, more and more Web applications have published APIs that enable software developers to easily integrate data and functions instead of building them themselves. Mashups can be considered to have an active role in the evolution of social software and Web 2.0. Mashup composition tools are usually simple enough to be used by end users; they generally do not require programming skills, and instead support the visual wiring together of GUI widgets, services, and components. These tools therefore contribute to a new vision of the Web, where users are able to contribute.
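The combine-and-enrich idea behind a mashup can be sketched in a few lines. The two data structures below are made-up stand-ins for JSON responses from two hypothetical open APIs (a places service and a weather service); a real mashup would fetch them over HTTP and render the merged result in a Web page:

```python
# Stand-in for a response from a hypothetical places API.
places = [
    {"id": 1, "name": "City Museum", "city": "Springfield"},
    {"id": 2, "name": "Harbor Park", "city": "Shelbyville"},
]
# Stand-in for a response from a hypothetical weather API, keyed by city.
weather = {
    "Springfield": {"temp_c": 21, "conditions": "sunny"},
    "Shelbyville": {"temp_c": 17, "conditions": "cloudy"},
}

def mashup(places, weather):
    """Combine the two sources into one enriched result set."""
    return [
        {**place, **weather.get(place["city"], {})}  # merge place with its weather
        for place in places
    ]

for row in mashup(places, weather):
    print(row["name"], "-", row["conditions"])
```

The enriched rows (a place plus its current weather) are exactly the kind of result that neither source was originally built to produce on its own.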

Monday, July 2, 2012

National Science Foundation Network (NSFNET)

Chapter 2 Fundamentals of the World Wide Web and Internet


National Science Foundation Network (NSFNET)


The National Science Foundation Network (NSFNET) was a program of coordinated, evolving projects sponsored by the National Science Foundation (NSF) beginning in 1985 to promote advanced research and education networking in the United States. NSFNET was also the name given to several nationwide backbone networks that were constructed to support NSF's networking initiatives from 1985 to 1995. Initially created to link researchers to the nation's NSF-funded supercomputing centers, it developed, through further public funding and private-industry partnerships, into a major part of the Internet backbone.



History

Following the deployment of the Computer Science Network (CSNET), a network that provided Internet services to academic computer science departments, in 1981, the U.S. National Science Foundation (NSF) aimed to create an academic research network facilitating access by researchers to the supercomputing centers funded by NSF in the United States.
In 1985, NSF began funding the creation of five new supercomputing centers: the John von Neumann Computing Center at Princeton University, the San Diego Supercomputer Center (SDSC) on the campus of the University of California, San Diego (UCSD), the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, the Cornell Theory Center at Cornell University, and the Pittsburgh Supercomputing Center (PSC), a joint effort of Carnegie Mellon University, the University of Pittsburgh, and Westinghouse.