Monday, August 28, 2017

LET’S TALK ABOUT – COMPUTER CLUSTERING



A computer cluster consists of a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.

They are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world such as IBM's Sequoia.


Supercomputer vs. Computer Cluster

Supercomputer -> isn't a name for one particular type of computer; it's a term for computers used to solve problems that require processing capabilities nearly as big as anyone can build at a given time. The types of systems people call supercomputers change over time; the supercomputers of yesteryear aren't that super any more.


Cluster computers -> are loosely coupled parallel computers where the computing nodes have individual memory and instances of the operating system, but typically share a file system and use an explicitly programmed high-speed network for communication. The term "loosely coupled" refers to the technical details of how such machines are constructed.



BASIC CONCEPTS
The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations.


The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network.

A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster which may be built with a few personal computers to produce a cost-effective alternative to traditional high performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer.

Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance.



CLUSTER ATTRIBUTES
Computer clusters may be configured for different purposes ranging from general purpose business needs such as web-service support, to computation-intensive scientific calculations.

"Load-balancing" clusters are configurations in which cluster nodes share the computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time is optimized. Think of Amazon, Google, Facebook, Hotmail, etc. (Of course, we are talking about SERVERS that "serve" web services!)
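As a toy illustration of the load-balancing idea, here is a minimal sketch of a round-robin dispatcher that hands each incoming query to the next node in turn. The node names are hypothetical; a real cluster would use actual host addresses.

```python
from itertools import cycle

# Hypothetical node names standing in for real cluster hosts.
nodes = ["node-1", "node-2", "node-3"]

def make_dispatcher(nodes):
    """Return a function that assigns each incoming query to the next
    node in round-robin order, spreading the workload evenly."""
    ring = cycle(nodes)
    return lambda query: (query, next(ring))

dispatch = make_dispatcher(nodes)
assignments = [dispatch(f"query-{i}") for i in range(6)]
# Each node ends up handling every third query.
```

Production load balancers also weigh node health and current load, not just turn order, but the round-robin core is the same.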

Other computer clusters are used for computation-intensive purposes rather than for IO-oriented operations such as web services or databases. For instance, a computer cluster might support computational simulations of vehicle crashes or weather.

"High-availability clusters" (HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure.
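The failover idea behind HA clusters can be sketched in a few lines; the node names and health check below are illustrative assumptions, not any particular HA product.

```python
def pick_healthy_node(nodes, is_healthy):
    """Return the first node whose health check passes, or None if all
    nodes are down -- redundancy removes the single point of failure."""
    for node in nodes:
        if is_healthy(node):
            return node
    return None

cluster = ["primary", "standby-1", "standby-2"]
down = {"primary"}  # pretend the primary node has just failed
serving = pick_healthy_node(cluster, lambda n: n not in down)
# Service continues on a standby despite the primary's failure.
```

Real HA software (Linux-HA, for example) adds heartbeats, fencing, and state replication on top of this basic pick-a-survivor logic.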


BENEFITS
Clusters are primarily designed with performance in mind, but installations are based on many other factors such as:

  1. Fault tolerance (the ability of a system to continue working when a node malfunctions), which also allows for simpler scalability
  2. High-performance situations
  3. Low frequency of maintenance routines
  4. Resource consolidation
  5. Centralized management


Other advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.


DESIGN AND CONFIGURATION
One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching grid computing.

In a Beowulf system, the application programs never see the computational nodes (called slave computers) but only interact with the "Master" which is a specific computer handling the scheduling and management of the slaves. In a typical implementation the Master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general purpose network of the organization.  The slave computers typically have their own version of the same operating system, and local memory and disk space. However, the private slave network may also have a large and shared file server that stores global persistent data, accessed by the slaves as needed.

Due to the increasing computing power of each generation of game consoles, a novel use has emerged: repurposing them into high-performance computing (HPC) clusters. Examples of game console clusters are Sony PlayStation clusters and Microsoft Xbox clusters. Another consumer-grade example is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator chips. Besides game consoles, high-end graphics cards can also be used. Using graphics cards (or rather their GPUs) for calculations is vastly more economical than using CPUs, despite being less precise. However, when double-precision values are used, the results become as precise as those from CPUs, while the hardware remains much less costly to purchase.
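The single- vs. double-precision trade-off mentioned above can be demonstrated without a GPU. The sketch below emulates 32-bit floating-point addition (the precision consumer GPUs historically favored) by rounding every partial sum through a 32-bit representation, and compares it with ordinary 64-bit arithmetic.

```python
import struct

def to_float32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float and back."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Add 0.1 ten thousand times in emulated single precision...
total32 = 0.0
for _ in range(10_000):
    total32 = to_float32(total32 + to_float32(0.1))

# ...and in ordinary double precision.
total64 = sum(0.1 for _ in range(10_000))

# total64 lands essentially on 1000, while total32 carries a visibly
# larger rounding drift accumulated across the ten thousand additions.
```

The same effect, scaled up to millions of operations, is why double precision matters for scientific workloads on GPU clusters.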

Computer clusters have historically run on separate physical computers with the same operating system. With the advent of virtualization, the cluster nodes may run on separate physical computers with different operating systems, overlaid with a virtualization layer so that they appear similar. The cluster may also be moved across various virtual configurations as maintenance takes place. An example implementation uses Xen as the virtualization manager together with Linux-HA.


DATA SHARING AND COMMUNICATION
As computer clusters were appearing during the 1980s, so were supercomputers. One of the elements that distinguished the two classes at that time was that the early supercomputers relied on shared memory. To date, clusters do not typically use physically shared memory, and many supercomputer architectures have abandoned it as well.

However, the use of a clustered file system is essential in modern computer clusters. Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes or the Oracle Cluster File System.



MESSAGE PASSING AND COMMUNICATION
Two widely used approaches for communication between cluster nodes are MPI (the Message Passing Interface) and PVM (the Parallel Virtual Machine).

PVM was developed at Oak Ridge National Laboratory around 1989, before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that present the node as a "parallel virtual machine". PVM provides a run-time environment for message passing, task and resource management, and fault notification. It can be used by user programs written in C, C++, or Fortran.

MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and the National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specification then gave rise to specific implementations, which typically use TCP/IP and socket connections. MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, and Python. Thus, unlike PVM, which provides a concrete implementation, MPI is a specification that has been implemented in systems such as MPICH and Open MPI.
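Real MPI programs run against an implementation such as MPICH or Open MPI (in Python, typically via the mpi4py package). As a self-contained, single-machine sketch of the same explicit send/receive model, the following uses standard-library threads and queues to stand in for ranks and messages.

```python
from queue import Queue
from threading import Thread

def worker(rank, inbox, outbox):
    # Each worker "rank" receives one message, computes, and sends a
    # reply -- the send/receive pattern MPI uses across real nodes.
    task = inbox.get()
    outbox.put((rank, task * task))

outbox = Queue()
threads = []
for rank in range(3):
    inbox = Queue()
    t = Thread(target=worker, args=(rank, inbox, outbox))
    t.start()
    inbox.put(rank + 1)  # "send" a task to this rank
    threads.append(t)

for t in threads:
    t.join()

results = dict(outbox.get() for _ in range(3))
# results maps each rank to the square of its task: {0: 1, 1: 4, 2: 9}
```

In actual MPI the queues become network messages between machines, but the programming model (explicit sends and receives between numbered ranks) is the same.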


CONCLUSION
After reading all this info (which can be a bit complicated to understand!), let's make it interactive:


PLEASE, TELL ME HOW YOU FOUND THIS INFORMATION. IS IT HELPFUL?
LET ME KNOW YOUR COMMENTS

Monday, August 21, 2017

LET’S TALK ABOUT - THE INTERNET OF THINGS (ARE WE READY?)




BASICS
We have been observing the evolution of the Internet of Things (IoT) for several years now. In fact, if you’ve ever worn a FitBit, used a voice recognition feature or connected your mobile phone to personal Wi-Fi, then you probably are unable to fathom everyday life without it.
If you are not familiar with the acronym IoT, it refers to the connectivity of modern chip-enabled smart devices (things) that are being manufactured for everything from our homes to our work environments. In the near future, virtually all electronic devices will have 'connectivity.' So if it has a chip and access to the internet, it will have the capability to connect to all the other 'things' across the globe and share information. By some estimates, there will be 21 billion electronic devices connected to the internet by the end of the decade.
Data scientists and analysts have been predicting the future of the IoT in countless magazines, blogs and scholarly articles. Most of these authors seem to be of two opinions: some embrace the IoT for all the benefits it can offer society and some are alarmed by it, pointing to the potential dangers of these connected 'things' and their potential issues with cybersecurity.



BENEFIT
The big winners in an IoT world would appear to be consumers as organizations harvest the BIG DATA generated by these electronic devices to better serve their customers.

Think of it this way: based on your buying habits and data shared through connected appliances in your home, companies can offer you deals on new appliances when data indicates that your old one will soon need extensive service. Even further, your car could communicate with your home that you will arrive soon, indicating your house needs to make adjustments to the HVAC, open the garage and turn off the security system.

Studies suggest that the economic impact and benefits of the IoT will be huge. Economically speaking, by some estimates, the aggregate value of the IoT will approach $2.9 trillion by the end of the decade (2019-2020). Businesses that monitor your needs and are the first to offer their products at your convenience will be even bigger winners financially.

PRIVACY & FREE SOCIETY
While some welcome the IoT, others are concerned with the loss of personal privacy. While a significant portion of our society today likes to share every detail of their lives on social media, that is, of course, something that users choose to do.

With the IoT and BIG DATA on the other hand, there is no choice as information about our intimate lives is observed, recorded and shared with countless people and organizations. Some of that information may include very sensitive details of our daily activities that may prove detrimental to our future.

Let’s do the analysis with common sense:
  • What if your car reported to your insurance company that you are an aggressive driver and habitual speeder?
  • Would you really want the world to know what time each day your water usage peaks, or that you buy unhealthy hamburgers with a high fat content? Could these recorded habits be brought to the attention of your healthcare provider, or affect your health insurance?
  • Is your chip-enabled 'smart' TV monitoring not only the channels you watch, but also recording your conversations and reporting what you like to watch, whether adult content or “mature” shows about religion, politics, and anything else?




SOCIETY’S IOT FUTURE
The discussions concerning the merits and drawbacks of the IoT will certainly continue as it evolves, but we just aren't sure where it is taking us. Some authors foresee a more utopian society, with the IoT doing its part to take better care of our needs before we fully realize them. Other authors warn of an encroaching dystopian society where the intimate details of our lives are recorded and scrutinized, forcing individuals into a mold acceptable to the society of the future. What we do know is that we are heading down the technology highway at an ever-increasing speed, and the IoT is simply doing its part to speed things along.

So, are we losing the battle against technology and becoming more machine than human? Is this part of the next step of the Industrial Revolution, a Technological Evolution?


ARE YOU / WE / THEY PREPARED?




PLEASE, TELL ME HOW YOU FOUND THIS INFORMATION. IS IT HELPFUL?
LET ME KNOW YOUR COMMENTS
