DOTE

Chain And Rate

Friday, September 16, 2016

Capturing Solar Light and Transferring Energy Efficiently

Whether an electron is powering a cell phone or a cellular organism makes little difference to the electron; it is the ultimate currency of modern society and biology, and electricity is the most versatile and relevant energy available to man.

Capturing light and then transferring that energy to do work are the two main steps in a photovoltaic system. In Nature, these steps happen too fast for the energy to be wasted as heat: in green plants, light energy is captured by highly effective photosynthetic complexes and then transferred with almost 100% efficiency to reaction centers, where long-term energy storage is initiated.

Traditional silicon-based solar PV systems, however, do not follow Nature’s model. In Nature, the energy transfer process involves electronic quantum coherence. Indeed, this wavelike character of the energy transfer within the photosynthetic complex can explain its extreme efficiency, as it allows the complexes to sample vast areas of phase space to find the most efficient path. Two-dimensional electronic spectroscopy of the bacteriochlorophyll complex has, in fact, shown direct evidence for remarkably long-lived electronic quantum coherence. The lowest-energy exciton (a bound electron–hole pair formed when an incoming photon boosts an electron out of the valence energy band into the conduction band) gives rise to a diagonal peak that clearly oscillates. Surprisingly, this quantum beating lasted for the entire 660-femtosecond observation window, contrary to the older assumption that the electronic coherences responsible for such oscillations are rapidly destroyed.

Sunlight absorbed by bacteriochlorophyll (green) within the FMO protein (gray) generates a wavelike motion of excitation energy whose quantum mechanical properties can be mapped through the use of two-dimensional electronic spectroscopy. (Image courtesy of Greg Engel, Lawrence Berkeley National Laboratory.)

It may therefore come as no surprise that the first plastic solar cells are largely based on biomimetics, that is, on artificial photosynthesis: the human ability to gather and organize complex materials and organic molecules to replicate photosynthesis in a practical way.

Storage Network Technology

Fibre Channel supports three topologies: switched fabric, point-to-point, and arbitrated loop. Point-to-point refers to connecting two devices without the benefit of a network. Although fairly uncommon, it is sometimes used when there is no need to share devices but distance or performance is a problem. It has become less common since the advent of Ultra3 and Ultra320 SCSI.

Switched fabric, usually referred to simply as fabric, uses a Fibre Channel switch to provide full-bandwidth connections between nodes in the network. Fabrics may consist of one or many switches, depending on scale, availability, and cost considerations. Fibre Channel fabrics also provide other network services: switches implement naming, discovery, and time services as part of the fabric. This differs from many other network architectures. In Ethernet or IP networks, for example, DNS does not have to be part of a switch or router and is often provided by a separate device; in Fibre Channel fabrics, these services are required to be implemented in the switch and are integrated into the fabric.

The terms fabric and switch are often used interchangeably. They are not the same and should not be used as such. Fabric refers to the topology and services, but not a device. A switch is a network device that implements the fabric.

The third Fibre Channel topology is called Fibre Channel Arbitrated Loop, or just loop. Arbitrated loops were conceived of as an inexpensive way to implement a SAN by eliminating the relatively expensive switch, along with its integrated services. In an Arbitrated Loop, all nodes are connected in a loop, with frames passing from one node to the next. All nodes on the loop share the available bandwidth, which inhibits the scalability of Arbitrated Loop. The more nodes that are transmitting on the network, the less bandwidth is available for any individual node. As fabric switches have become less costly, Arbitrated Loop has fallen from favor. It is used mostly inside storage devices such as arrays.


Fibre Channel Addressing

All Fibre Channel nodes carry an address called a World Wide Name (WWN). The WWN is a unique 64-bit identifier. Much like an Ethernet MAC address, part of the WWN is unique to the manufacturer of the equipment, and the rest is usually a serialized number. However it is created, the WWN is unique and specific to a physical port.
The I/O Stack
A 64-bit address is very large. Needing a frame to carry two of these, one for the source and one for the destination, makes routing packets between ports cumbersome. To combat this problem, Fibre Channel also uses an alternative addressing scheme within switched fabrics. Each port is assigned a 24-bit port address within the fabric when the port logs in. There are two advantages to this. First, it is faster to route packets on a smaller address (and takes less processor time). Second, the addresses are dynamically assigned and managed by the fabric operating system, which also makes it easier and faster for the OS to deal with changes in the fabric.
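
As a concrete illustration of the smaller fabric address, here is a minimal sketch of splitting a 24-bit port address into the Domain/Area/Port fields it is conventionally described as containing. The example value is hypothetical and the code is only illustrative.

```python
# Minimal sketch: decompose a 24-bit Fibre Channel fabric port address (FC_ID)
# into its conventional 8-bit Domain, Area, and Port fields.
def parse_fc_id(fc_id: int) -> dict:
    if not 0 <= fc_id <= 0xFFFFFF:
        raise ValueError("FC_ID must fit in 24 bits")
    return {
        "domain": (fc_id >> 16) & 0xFF,  # identifies the switch in the fabric
        "area": (fc_id >> 8) & 0xFF,     # identifies a group of ports on that switch
        "port": fc_id & 0xFF,            # identifies the individual port
    }

print(parse_fc_id(0x010200))  # {'domain': 1, 'area': 2, 'port': 0}
```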

Changing the World Wide Name
Although it would seem that the WWN is immutable, there are some instances in which it can be changed. WWNs are often placed in nonvolatile memory (NVRAM) and as such can be changed, given the right utility. A utility such as this would need to be available at boot time, before the port was fully initialized. In the early days of Fibre Channel, it was not uncommon to find host bus adapters with this capability.
Why would anyone want to do such a thing? Two ports sharing the same WWN would have an effect similar to duplicate MAC addresses in an Ethernet environment: at least one of the devices would not be able to log into the network, or devices would become confused as to the origin of a frame.
The reason this facility sometimes exists is that the NVRAM that holds the World Wide Name can become corrupted, making the port unusable. This used to happen with host bus adapters placed in poorly shielded computers. It is a dangerous utility to have around and should be locked away, where no one can get at it.
Some devices do this on purpose. In certain multiport devices, some ports are kept offline in case a port fails. If one does, the spare port becomes active with the original port's WWN. This makes it look like the original port to the network. This method of failover has the advantage of not requiring hosts or applications to do anything. On the other hand, I/O is usually lost, and some applications may fail during the changeover.
Storage Area Network

Extending SANs over MAN and WAN

SANs based on Fibre Channel are isolated installations. Because Fibre Channel is a switched, not routable, protocol, it cannot be routed over a wide area network or metropolitan area network. There are many good reasons to want to connect SANs over a long distance, data protection being chief among them. Copying blocks of data in near real time over a distance is a key component of data protection. Fibre Channel can reach 2 kilometers, which is enough to get across a campus or over a river. Doing so, however, requires laying or leasing a dedicated fiber optic cable, which can be extremely costly. The solution is to find ways to use the public networks available from telecommunications providers, which provides a balance between cost and function.

There are several ways to extend a SAN beyond its own cable limits by using public networks. The most popular is to change the transport by using the IP network to carry an FC frame. A protocol called FCIP was developed to do this. The FC frame, including its payload, is encapsulated in an IP packet. The frame can now be routed over a public network. At its destination, the FC frame is stripped out and placed once again onto the Fibre Channel network. There are several SAN appliances that perform this function, as well as blades that integrate into Fibre Channel switches. A similar protocol called FC-BB sends FC frames over SONET.
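
To make the encapsulation idea concrete, here is a minimal tunneling sketch, assuming two gateways connected by TCP; the 4-byte length prefix is an illustrative framing choice and is not the real FCIP encapsulation header.

```python
# Minimal sketch: carry an FC frame as an opaque byte string inside a TCP
# connection between two gateways, so the public network only sees IP/TCP.
import socket
import struct

def send_fc_frame_over_ip(sock: socket.socket, fc_frame: bytes) -> None:
    # Prefix the frame with its length so the receiver can delimit frames in
    # the TCP byte stream; real FCIP defines its own encapsulation header.
    sock.sendall(struct.pack("!I", len(fc_frame)) + fc_frame)

def recv_fc_frame_over_ip(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", sock.recv(4, socket.MSG_WAITALL))
    return sock.recv(length, socket.MSG_WAITALL)
```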

Another way to get FC packets over a WAN or MAN is to crack open the frame; pull out the data; and place it in another type of packet, such as iSCSI. Converting to and from another type of protocol can be processor intensive and can lead to interesting address-mapping issues, but several products do this.

Finally, raw bits can be sent over an optical network. Optical switches convert electrical signals to wavelengths of light and send them out over a fiber optic cable. Several optical switches have blades available that convert physical Fibre Channel signals to optical and then send them over optical fiber. Flow control and other network functions have to be handled by the Fibre Channel switch. The optical network basically acts as a very long, fat, fast cable.
Extending IP SANs
The problems of routing FC frames do not exist for IP SANs. IP SANs use a routable transport protocol, and IP switches already have interfaces for WANs and MANs. What remains to be seen is whether applications can tolerate the latency inherent in a WAN or MAN.

Key Points 👍

  • Direct Attach Storage was the original storage architecture. The term DAS came later. One of the first disk systems was called Direct Access Storage Devices, or DASD, and was popular in the IBM mainframe environment. The choices of storage architecture changed in the 1990s with the introduction of Network Attached Storage (NAS) and Storage Area Networks (SAN).
  • Hard drives are the primary online storage media, with high speed and high capacity. Tape, CD-ROM/RW, DVD-ROM/RW, and magneto-optical systems are used mainly for backup and archive, as well as software distribution.
  • Aggregation into large systems provides benefits in speed, logical capacity, and data protection. Removable media libraries and jukeboxes reduce the chance of error, increase availability, and allow multiple computers to access different media simultaneously.
  • When data is accessed directly as blocks, it is called block I/O. If it is accessed through a file system, it is referred to as file I/O.
  • RAID is a way of increasing performance and data protection by writing and reading data to multiple disks at the same time. Major RAID functions include striping, the writing of different data to many disks simultaneously, and mirroring, the writing of the same data to several disks.
  • SCSI is a high-performance standard for transferring data to and from devices. It is used extensively for mass storage, and it encompasses both hardware specifications and a software protocol.
  • ATA is the most popular storage technology today. It is used in most desktop and laptop computers. It is an in-the-box technology, almost never used to attach storage externally. There is a new serial implementation called SATA or Serial ATA.
  • Network Attached Storage (NAS) devices are highly optimized file servers. They use standard protocols to communicate with a large number of clients. They provide high performance, can be quite scalable, and are easy to install and inexpensive to maintain. NAS uses a file head, sometimes called the NAS head, to provide a file system, management, and an interface to the network.
  • A SAN is a storage architecture that performs block I/O over a network. SANs have advantages over DAS in terms of distance capabilities, address space, the ability to support many-to-many device configurations, better cable plan management, greater scalability, and higher availability.
  • Fibre Channel (FC) is a high-speed, low-latency technology that marries networks with I/O channels. It is often used for SANs. Fibre Channel supports three topologies: fabric, arbitrated loop, and point-to-point. Fabric is the most common and allows for full-bandwidth connections between all nodes in the network. It also implements naming, discovery, and time protocols as part of the fabric.

Monday, August 8, 2016

Covert Channel over Cellular Voice Channel in Smartphones

Network covert channels represent a significant problem due to their security implications. Thus many research efforts have been focused on their identification, detection, and prevention. Covert channel identification is the process of discovering a shared resource that might be utilized for covert communication.

Research on this topic contributes to the field by identifying a new network covert channel in smartphones. Smartphones are always connected to the cellular network; however, little effort has been directed at investigating the potential security threats of covert communication over it. Previously, the cellular voice channel had never been used to launch such attacks. This service was designed to carry audio only, so cellular service providers have not applied information security protections, such as firewalls or intrusion detection systems, to guard cellular voice channel traffic in the cellular network core.

These channels are thus a prime choice for attempting a covert channel. Theoretically, this channel could be employed in smartphones to conduct multiple covert malicious activities, such as sending commands or even leaking information. Some past research has already studied modulating data to be “speech-like” and transmitting it through a cellular voice channel using a GSM modem and a computer. In addition, smartphone hardware designers have introduced a new smartphone design that provides higher-quality audio and video performance and longer battery life, and this new design allows smartphone applications to reach the cellular voice stream. Information in an application could therefore be intentionally or unintentionally leaked, or malware could be spread, through the cellular voice stream.

This could be accomplished by implementing a simple audio modem that is able to modulate data to be “speech-like” and access the cellular voice stream in order to inject information into the smartphone’s cellular voice channel. This covert channel could be accompanied by a rootkit that alters phone services to hide the covert communication. To investigate the potential threats of this covert channel, Android security mechanisms were tested and it was demonstrated that it is possible to build an Android persistent user-mode rootkit that intercepts Android telephony API calls to answer incoming calls without the user’s or the system’s knowledge. The developed modem, together with the rootkit, successfully leaked data from a smartphone application through the cellular voice channel stream by carrying modulated data with a throughput of 13 bps at 0.018% BER.
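
To make the audio-modem idea concrete, here is a minimal sketch of binary FSK over a narrowband audio signal. The frequencies, bit duration, and sample rate are illustrative assumptions; a real system, like the one in the research, must shape the waveform to be “speech-like” so the voice codec does not suppress it.

```python
# Minimal sketch: encode bits as audible tones (binary FSK) and recover them.
# Parameters are illustrative, not those used in the research.
import numpy as np

SAMPLE_RATE = 8000      # narrowband telephony rate
BIT_DURATION = 0.075    # seconds per bit (illustrative)
F0, F1 = 600.0, 1200.0  # tone for bit 0 / bit 1, within the voice band

def modulate(bits: str) -> np.ndarray:
    t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
    tones = {"0": np.sin(2 * np.pi * F0 * t), "1": np.sin(2 * np.pi * F1 * t)}
    return np.concatenate([tones[b] for b in bits])

def demodulate(signal: np.ndarray) -> str:
    n = int(SAMPLE_RATE * BIT_DURATION)
    freqs = np.fft.rfftfreq(n, 1.0 / SAMPLE_RATE)
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[i:i + n]))
        peak = freqs[np.argmax(spectrum)]          # dominant tone in this bit slot
        bits.append("1" if abs(peak - F1) < abs(peak - F0) else "0")
    return "".join(bits)

assert demodulate(modulate("1011001")) == "1011001"
```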

LITERATURE REVIEW
The covert channel concept was first presented by Lampson in 1973 as a communication channel that was neither designed nor intended for carrying information. A covert channel utilizes mechanisms that are not intended for communication purposes, thereby violating the network’s security policy. Three key conditions were introduced that help in the emergence of a covert channel: 

  1. A global shared resource between the sender and the receiver must be present, 
  2. The ability to alter the shared resource, 
  3. A way to accomplish synchronization between the sender and the receiver. 


The cellular voice channel has all three conditions, making it an ideal channel for implementing a covert channel. Network covert channel research currently focuses on exploiting weaknesses in common Internet protocols such as TCP/IP, HTTP, VoIP, and SSH to embed covert communication. In the cellular network field, it has been demonstrated that high-capacity covert channels can be embedded in SMS and used as data exfiltration channels by composing the SMS in Protocol Data Unit (PDU) mode. Steganographic algorithms have been introduced to hide data in the content of MMS messages, for use in one-time password and key communication. The cellular voice channel in smartphones, however, has only recently been attempted.

As smartphones continue to increase their computational capabilities, employees and individuals increasingly rely on them to perform their tasks, and as a result smartphone security becomes more significant than ever before. One of the most serious threats to information security, whether for an organization or an individual, is covert channels, because they can be employed to leak sensitive information, divert the ordinary use of a system, or coordinate attacks on a system.

Therefore, identification of covert channels is considered an essential task. The research takes a step in this direction by identifying a potential covert channel that could affect smartphone security. It provides a proof of concept of the ability to use the cellular voice channel as a covert channel to leak information or distribute malware. It introduces details of designing and implementing the system and the challenges and constraints that were faced in accomplishing it. During the research it became clear that smartphone hardware and software designs have changed recently; this new design has been adopted by multiple companies, and new smartphones are being released that use it without considering the security vulnerability.

The right screen shows the attacker placing a call to the victim; on the left screen, the rootkit on the compromised phone recognizes the attacker’s caller ID and, based on that, answers the call without anything appearing on the victim’s screen.

The research also proves that communication between the AP and the BPs is vulnerable to attack in Android OS. In addition, it discusses some of the Android security mechanisms that were easily bypassed to accomplish the mission. The paper illustrates some discovered flaws in the Android application architecture that allow an attacker to break into significant and critical Android operations.

Sunday, July 24, 2016

The Need for a Business Model in Software Engineering

Software engineering faces several dilemmas. It has comprehensive goals, but limited tools. It demands broad perspectives, but depends on narrowly focused practitioners. It places a high premium on quality, but often has insufficient inputs to its problem-solving process. As a field, software engineering has yet to define theories and frameworks that adequately combine the disciplines of software and hardware technology with related business and social science disciplines to attack real-world problems optimally.

Despite advances, software engineering tends to remain code driven and is burdened with testing for bugs, program errors, and verification, even though reusable objects, reusable applications, and CASE tools have long been available. The engineering of software entails inviting software technology to help tackle human problems rather than just shoehorning human problems into a software solution. This requires reordering the relation between people and computers; computer programs are understood to play an important but limited role in problem-solving strategy. Such an approach to software engineering would still be software driven in the sense that it was driven by the need to develop software for automated as opposed to manual problem solving; however, it would view problems and evaluate solutions from a broadly interdisciplinary perspective in which software was understood and used as a tool.



Requirements engineering is supposed to address the problem part of software engineering, but it is part of the traditional view that looks at the problem-solving process as a phase in the software development life cycle, rather than at the software development life cycle as part of the problem-solving process. The software development life cycle never ends with a solution, but only with a software product. Although one may assume that a software product should be the solution, in practice this never happens because software systems are only part of a total organizational context or human system; one cannot guarantee that these solutions are effective independently of their context.

Implications of the New Business Model
The following consequences result when one refocuses from engineering software for the sake of the technological environment to engineering software for people’s sake:

  • Solutions will evolve only from carefully understood problems. The resulting solutions will be guided by their originating problems and considered successful only if they are able to solve those problems. The solution is never solely the software product, but everything needed to solve the problem.
  • Problems will not be defined in terms of what the people want the software to do for them. Problem definition will address the relevant human needs regardless of the role of the software in meeting those needs. Subsequent to an interdisciplinary definition of the problem, an interdisciplinary solution will be proposed that will utilize the available, relevant human, financial, informational, technological, and software resources.
  • The iterative software development process will become part of the synchronized business process and will in turn deliver business process total solutions. Thus, the business process will shape the software process in terms of its goals, metrics, and requirements.

Monday, May 9, 2016

Extracting Information from Data

How do you figure out what portion of what you have captured is useful to your investigation? What happens if you can’t find what you are looking for? These are some of the questions that run through the mind of every forensic investigator. After the data is imaged, the forensic examiner can search and index all contents of the drive without changing or modifying the data, thereby preserving the evidence. But what if the evidence is missing? Criminals or intruders can use programs to delete email, pictures, and documents. Trained forensic investigators must have tools available that will help them recover this information and help them prepare the evidence for presentation.

You’ll look at the process of divining the information you need from the data you have captured. You’ll study the process of analyzing and organizing the information you have gathered. You’ll learn when to grab the low-hanging fruit and when to dig deeper for data that may or may not exist. You’ll study the various types of hidden and trace evidence. Finally, you’ll move on to preparing and presenting evidence.

What Are You Looking For?

For a long time he remained there, turning over the leaves and dried sticks, gathering what seemed to me to be dust into an envelope and examining with his lens not only the ground, but even the bark of the tree as far as he could reach.
Dr. Watson on Sherlock Holmes

Finding what you are looking for in a computer forensics investigation can be likened to the preceding quote. There are so many places to look because operating systems vary, application programs vary, and storage methods differ. Computer evidence is almost never isolated. It is a result of the stored data, the application used to create the data, and the computer system that produced the activity. Systems can be huge and complex, and they can change rapidly. Data can be hidden in several locations. After you find it, you may have to process it to make it humanly readable.

Begin the discovery process by installing the disk in your analysis system and booting the system using a boot disk. Be careful not to damage the disk when you connect it to the interfaces. Next, identify the partitions on the drive using a partition utility. Exercise caution when you use the utility; you don’t want to risk modifying the partition table or disk label. In fdisk, you should select the Display Partition Information option to view the name/number, volume label, size, and filesystem associated with every partition on the hard disk. When you are ready to start examining the imaged data, you’ll have many places to explore.

Internet Files
To determine what it is you are looking for, you must first determine the type of intrusion or potential crime and the appropriate response. Let’s start with a case that would involve the Internet and pictures. For example, an employee is suspected of illegally accessing and downloading pictures of proprietary designs from a competitor’s internal website and using these designs in his own work. Due to the nature of the business, this is a serious offense and you have been called to investigate. After your imaged drive is ready to be examined, open your forensic software and start a case.

Example Of Forensic Software
When a user logs on to an operating system for the first time, a directory structure is created to hold that individual user’s files and settings. This structure is called the profile, and it has a directory that is given the same name as the user. This profile contains several folders and files. Because this case involves searching for images that were downloaded from the Internet, you can begin by adding evidence from the folders where these files may be stored.

Example Of Using Forensic Software
Before a browser actually downloads a web page, it looks in the Temporary Internet Files folder to see if the information is already there. This is done to increase the speed at which the page will load. Web browsers cache the web pages that the user recently visited. This cached data is referred to as a temporary Internet file, and it is stored in a folder on the user’s hard drive. All of the HTML pages and images are stored on the computer for a certain amount of time, or they are deleted when they reach a certain size.

Sometimes, while a user is viewing web pages, other pages pop up at random. These pop-ups can result in files being written to a user’s hard disk without their knowledge. For example, many hacker sites have Trojan horses that automatically download objectionable material (that is, files) to an unsuspecting user’s computer without the user’s knowledge. The following illustration shows how the information in the Temporary Internet Files folder can be viewed through forensic software.

Besides the temporary Internet files, you may also find evidence in the History folder. The History folder contains a list of links to web pages that were visited. The History feature in Internet Explorer has an option for how long the list of visited websites should be kept. The default setting is 20 days. Computer-savvy people often change this default setting to a shorter period, or they click the Clear History button to erase where they have been before they log off the computer.

The Cookies folder is similar to the History folder. It holds cookies, or information stored by Internet sites that were visited by the user. A number of utilities that work with forensic software display the contents of a cookie in an easily readable format. One such utility is CookieView, which you can download from the Internet.

Many applications create temporary files when the application is installed and when a file is created. These files are supposed to be deleted after the application is installed or when you close the document, but sometimes this doesn’t happen. For example, each time you create a document in Microsoft Word, the software creates a temporary file (with a .tmp extension). Temporary files can possibly provide some useful evidence.

If, during your investigation of the computer, you find no history files, temporary Internet files, or temporary files in the expected folders, you can assume the data has been stored somewhere else so you’ll need to dig deeper. Here are some file types you may want to look for:
  • Files with strange locations
  • Files with strange names
  • Filenames with too many dots, or that start with a period (.) and contain spaces
  • Files that have changed recently
MACtime is a common forensic tool that is used to see what someone did on a system. It creates an ASCII timeline of file activity. Various other tools are also available. You can use X-Ways Trace to analyze a drive to locate information about Internet-related files. Such tools can be very useful in gathering evidence (such as the site visited, the date last visited, and the cache filename).
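
As a rough idea of what a MAC-time style timeline looks like, here is a minimal sketch that walks a mounted image, collects each file’s modify/access/change timestamps, and prints them in time order. The mount point and output format are illustrative assumptions, not those of the actual MACtime tool.

```python
# Minimal sketch of a MAC-time style timeline of file activity.
import os
import time

def build_timeline(root: str) -> None:
    events = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # unreadable file; skip rather than abort the walk
            events.append((st.st_mtime, "modified", path))
            events.append((st.st_atime, "accessed", path))
            events.append((st.st_ctime, "metadata changed", path))
    for ts, kind, path in sorted(events):
        print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(ts)), kind, path)

build_timeline("/mnt/evidence")  # hypothetical mount point of the imaged drive
```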

But after all the explanation above, I suggest you’d be better off using Kali Linux. ^_^

Tuesday, January 26, 2016

Intelligent Agents: An Overview

An intelligent agent (IA) is a computer program that helps a user with routine computer tasks. It performs a specific task based on predetermined rules and knowledge stored in its knowledge base. IAs are a powerful tool for overcoming the most critical limitation of the Internet, information overload, and for making electronic commerce a viable organizational tool.

DEFINITIONS

The term agent is derived from the concept of agency, referring to employing someone to act on your behalf. A human agent represents a person and interacts with others to accomplish a predefined task.

The concept of agents goes surprisingly far back. More than 50 years ago, Vannevar Bush envisioned a machine called a memex. In his mind, he could see the memex assisting humans through huge fields of data and information. In the 1950s, John McCarthy developed Advice Taker, a software robot that would navigate the networks of information that were expected to develop in time. Advice Taker's similarity to today's agents is amazing. Given a task by a human user, the robot takes the necessary steps or asks for advice from the user when it gets stuck.

The futuristic prototypes of intelligent personal agents, such as Apple's Phil and Microsoft's Bob, perform complicated tasks for their users following the same functions laid out by McCarthy in Advice Taker. The modern approach to intelligent agents moved to mobile and multiple agents in the mid 1980s under research topics like distributed artificial intelligence (Bond and Gasser, 1988) and agency theory.

During the development process, several names have been used to describe intelligent agents, including software agents, wizards, software demons, knowbots, and softbots (intelligent software robots). These terms sometimes refer to different types of agents or agents with different intelligence levels.

Demons were a popular term for agents in the early stage of development. A demon is a small computer program that runs in the background and takes action to alert the user when a prespecified condition is met. An example is the X Window System program xbiff. This program continually monitors a user's incoming e-mail and indicates via an icon whether there are any unread messages. Virus-detection agents and incoming-e-mail agents are similar examples. Recently, the term bot has become a common substitute for the term agent. Bot is an abbreviation for robot. Bots are given specific prefixes indicating their use. Typical bots are chatterbots, docbots, hotbots, jobbots, knowbots, mailbots, musicbots, shopbots, spiderbots, spambots, and sexbots (of course it had to be).
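
For a feel of how such a demon works, here is a minimal sketch in the spirit of xbiff: a background loop that watches a mailbox file and announces new mail. The mailbox path and polling interval are assumptions.

```python
# Minimal sketch of a mail-watching demon: poll a mailbox file and alert
# the user when it grows.
import os
import time

MAILBOX = "/var/mail/alice"   # hypothetical mbox-format mailbox
POLL_SECONDS = 30

def watch_mailbox() -> None:
    last_size = os.path.getsize(MAILBOX) if os.path.exists(MAILBOX) else 0
    while True:
        size = os.path.getsize(MAILBOX) if os.path.exists(MAILBOX) else 0
        if size > last_size:
            print("You have new mail.")  # a real demon would raise an icon or notification
        last_size = size
        time.sleep(POLL_SECONDS)

# watch_mailbox()  # runs forever; start it in the background
```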

There are several definitions of what an intelligent agent is. Each definition explicates the definer's perspective. Here are some examples:
  • Intelligent agents are software entities with some degree of independence or autonomy that carry out some set of operations on behalf of a user or another program and in so doing employ some knowledge or representation of the user's goals or desires.
  • Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed.
  • Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment, action to affect conditions in the environment, and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.
  • A software implementation of a task in a specified domain, on behalf or in lieu of an individual or other agent. The implementation will contain homeostatic goal(s), persistence, and reactivity, to the degree that the implementation will persist long enough to carry out the goal(s), and will reach sufficiently within its domain to allow the goal(s) to be met and to know that fact.
  • An agent is a computer system that is situated in some environment and is capable of autonomous action in this environment in order to meet its design objectives.


INTELLIGENCE LEVELS

Intelligence is a key feature related to defining intelligent agents because it differentiates them from ordinary agents. Wooldridge (2002) suggested that intelligence in this sense possesses the following features:
  • Reactivity. Intelligent agents are able to perceive their environment and respond in a timely fashion to changes that occur in it in order to satisfy their design objectives.
  • Proactiveness. Intelligent agents are able to exhibit goal-directed behavior by taking the initiative in order to satisfy their design objectives.
  • Social ability. Intelligent agents are capable of interacting with other agents (and possibly humans) in order to satisfy their design objectives.

There are four different levels of agent intelligence, as follows:
Level 0 (the lowest). These agents retrieve documents for a user under straight orders. Popular Web browsers fall into this category. The user must specify the URLs where the documents are. These agents help in navigating the Web.
Level 1. These agents provide a user-initiated searching facility for finding relevant Web pages. Internet search agents such as Google, AltaVista, and Infoseek are examples. Information about pages, titles, and word frequency is stored and indexed. When the user provides keywords, the search engine matches them against the indexed information. These agents are referred to as search engines.
Level 2. These agents maintain user profiles. Then they monitor Internet information and notify the users whenever relevant information is found. An example of such agents is WebWatcher, the tour-guide agent for the Web developed at Carnegie Mellon University. Agents at this level are frequently referred to as semi-intelligent or software agents.
Level 3. Agents at this level have a learning and deductive component of user profiles to help a user who cannot formalize a query or specify a target for a search. DiffAgent (Carnegie Mellon University) and Letizia (MIT) are examples. Agents at this level are referred to as learning agents or truly intelligent agents.

COMPONENTS OF AN AGENT

Intelligent agents are computer programs that contain the following components (captured as a data structure in the sketch after this list):

  • Owner. User name, parent process name, or master agent name. Intelligent agents can have several owners. Humans can spawn agents, processes can spawn agents (e.g., stock brokerage processes using agents to monitor prices), or other intelligent agents can spawn their own supporting agents.
  • Author. Development owner, service, or master agent name. Intelligent agents can be created by people or processes and then supplied as templates for users to personalize.
  • Account. Intelligent agents must have an anchor to an owner's account and an electronic address for billing purposes or as a pointer to their origin.
  • Goal. Clear statements of successful agent task completion are necessary, as well as metrics for determining the task's point of completion and the value of the results. Measures of success can include simple completion of a transaction within the boundaries of the stated goal or a more complex measure.
  • Subject description. The subject description details the goal's attributes. These attributes provide the boundaries of the agent, task, possible resources to call on, and class of need (e.g., stock purchase, airline ticket price).
  • Creation and duration. The request and response dates requested.
  • Background. Supporting information.
  • Intelligent subsystem. An intelligent subsystem, such as a rule-based expert system or a neural computing system, provides several of the characteristics described above.
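
One way to visualize the components listed above is as a simple data structure. This is only a sketch: the field names, types, and example values are assumptions rather than a standard agent schema.

```python
# Minimal sketch: the listed agent components captured as a data structure.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class IntelligentAgent:
    owner: str                           # user, parent process, or master agent
    author: str                          # developer, service, or template source
    account: str                         # billing anchor / electronic address
    goal: str                            # statement of successful task completion
    subject_description: str             # attributes and boundaries of the task
    created: date = field(default_factory=date.today)
    duration_days: Optional[int] = None  # how long the agent should persist
    background: str = ""                 # supporting information
    intelligent_subsystem: str = "rule-based expert system"  # or a neural system

agent = IntelligentAgent(
    owner="alice",
    author="brokerage-template-v1",
    account="alice@example.com",
    goal="Buy 100 shares if the price drops below a set threshold",
    subject_description="stock purchase; symbol, limit price, expiry",
)
```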

Tuesday, January 19, 2016

Protecting Yourself Against DNS Distributed Denial of Service (DDoS) Attacks

Distributed Denial of Service via DNS (DNS DDoS) is now a common network traffic attack used by various malicious actors to negatively impact business or agency operations. DNS DDoS attacks are designed to bring down DNS servers and consume network bandwidth, thereby impacting critical IT applications (e.g., email, web transactions, VoIP, SaaS). For targeted businesses, there are two typical roles in a DNS DDoS attack: victim and accomplice. By using best practices for DNS configuration and operation, you reduce your risk of being impacted by a DNS DDoS attack or being used in one.

Avoid being a victim
To avoid being a victim of a DNS DDoS attack, you must understand the components of the attack and have a plan to mitigate them. While you can never completely eliminate DNS DDoS attacks, you can take measures to survive them and keep critical applications running. Below are some points to temper the impact of a DNS DDoS attack on your IT infrastructure:

- Over-provision DNS Servers
- Build in High Availability
- Set Response Rate Limit by Source IP Address (see the sketch after this list)
- Set Response Rate Limit by Destination IP Address
- Use Cloud-based Anycast Secondary Servers
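
As a sketch of what response rate limiting by source IP amounts to, here is a minimal token-bucket limiter keyed by client address. The rate and burst values are illustrative assumptions, not recommended settings for any particular DNS server.

```python
# Minimal sketch of response rate limiting (RRL): a token bucket per source IP.
import time
from collections import defaultdict

RATE = 5.0    # responses per second allowed per source IP (assumed)
BURST = 10.0  # short-term burst allowance (assumed)

class ResponseRateLimiter:
    def __init__(self) -> None:
        self.buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(self, src_ip: str) -> bool:
        bucket = self.buckets[src_ip]
        now = time.monotonic()
        bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
        bucket["last"] = now
        if bucket["tokens"] >= 1.0:
            bucket["tokens"] -= 1.0
            return True
        return False  # drop or truncate the response instead of amplifying an attack

limiter = ResponseRateLimiter()
print(limiter.allow("203.0.113.7"))  # True until the bucket is drained
```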

Don't be an Accomplice
The flip side of a DNS DDoS attack is the accomplice who unwittingly amplifies the attack with their DNS infrastructure. Being an accomplice in a DNS DDoS attack, while not as devastating as being the target, still impacts DNS services and network bandwidth and leaves the door open to possible litigation due to weak IT controls. Simple best-practice configuration will help reduce this risk:

- Close your 'Open' DNS Recursive Server
- Rate Limit Responses from Authoritative Name Servers

Saturday, May 9, 2015

Why We are Entering the Solar Age

With concerns about rising oil prices and climate change spawning political momentum for renewable energy, solar electricity is poised to take a prominent position in the global energy economy. However, claiming that we are ready to enter the solar age when the global consumption of oil is steadily on the rise may sound like a green-minded false prophecy. All predictions of an oil peak made in the 1960s were wrong: we never experienced the lack of oil that was deemed inevitable after the 1973 oil shock, and the exhaustion of world oil reserves is a hotly debated topic. For example, an oil industry geologist tackling the mathematics of Hubbert’s method suggests that the oil peak occurred at the end of 2005.

Predicted scenario by 2040. Out of a total electricity consumption of 36,346 TWh (up from 15,578 TWh in 2001, IEA), renewable energy sources will cover 29,808 TWh, with solar energy becoming largely predominant. (Source: EREC.)
On the other hand, the price of oil has multiplied by a factor of 10 in the last few years, whereas in the US, for example, domestic petroleum now returns as little as 15 joules for every joule invested compared to the 1930s when the energy return on energy invested (EROI) ratio was 100.

The demand for oil has boomed alongside globalization and rising demand from China and India, where governments are managing the entrance into the industrial job market of some 700 million farmers, that is, about twice the total number of workers in the European Union. Global energy demand will more than double by 2050 and will triple by the end of the century. At the same time, an estimated 1.64 billion people, mostly in developing countries, are not yet connected to an electric grid.

Finally, the world’s population is rapidly learning that climate change due to human activities is not an opinion: it is a reality that in the US has already hit entire cities (New Orleans, 2005), and in southern Europe hurt people and the whole ecosystem with temperatures close to 50°C in mid-June 2007. Overall, these economic, environmental and societal critical factors require us to curb CO2 emissions soon, and switch to a massive scale use of renewable materials and renewable energy (RE) until the day when cheap and abundant solar energy becomes a reality.

Access to affordable solar energy on a large scale, admittedly, is an enormous challenge given that presently only 0.2% of global energy is of solar origin. The low price of oil in the 1990s ($10–$20 a barrel) put a dampener on scientific ingenuity for the whole decade, since many developments were put on the shelf until a day in the future when their use would become “economically viable.” All of this is rapidly changing with booming oil prices. Solar electricity generation is now the fastest-growing electricity source, doubling its output every two years. The solar energy market has grown at a rate of about 50% per year for two years, reaching 3,800 MW in 2007 from 2,521 MW in 2006.

PV technology evolution. (Source: EPIA, 2004.)

Thursday, February 19, 2015

Multimedia over IP Networks: Layered Multicast (A Review)

Unlike stand-alone multimedia applications, in which the multimedia contents are originated and displayed on the same machine, multimedia networking has to enable multimedia data that originate on a source host to be transmitted through the IP networks (the Internet) and displayed at the destination host. The Internet, which uses IP protocols and packet switching, has become the largest network of networks in the world (it consists of a combination of many wide area and local area networks, WANs and LANs).

Multimedia over the Internet is fast growing among service providers and potential customers. Owing to the special requirements of audio and video perception, most existing and emerging real-time services need a high level of quality and impose great demands on the network. Real-time multimedia applications (e.g., live video streaming and video conferences), which are very sensitive to transmission delay and jitter and usually require a sufficiently high bandwidth, are a good example. To this end, various systems have been made available on network protocols and architectures to support the quality of service (QoS), such as the integrated services (IntServ) and differentiated services (DiffServ) models. Multiprotocol label switching (MPLS) is another technique often mentioned in the context of QoS assurance, but its real role in QoS assurance is not exactly the same as that of the IntServ and DiffServ models.

Layered Internet protocol (IP)
The IP protocol is the set (suite) of communications protocols that implement the protocol stack on which the Internet and most commercial networks run. It has also been referred to as the TCP/IP protocol, since two of the most important protocols in it are the transmission control protocol (TCP) and the Internet protocol (IP), which were also the first two networking protocols defined. The IP protocol can be viewed as a set of layers in which each layer solves a set of problems involving the transmission of data; generally, a protocol at a higher level uses a protocol at a lower level to help accomplish its aims. The IP suite uses encapsulation to provide abstraction of protocols and services. The upper layers are logically closer to the user and deal with more abstract data, relying on lower-layer protocols to translate the data into forms that can eventually be physically transmitted. The IP protocol is now commonly accepted as a top-down five-layer model, having application, transport, network, data-link, and physical layers.
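
The encapsulation described above can be pictured with a minimal sketch in which each layer wraps the data handed down from the layer above; the header strings are toy placeholders, not real TCP/IP header formats.

```python
# Minimal sketch: each layer adds its own header around the layer above.
def encapsulate(application_data: bytes) -> bytes:
    segment = b"TCP-HDR|" + application_data  # transport layer
    packet = b"IP-HDR|" + segment             # network layer
    frame = b"ETH-HDR|" + packet              # data-link layer
    return frame                              # the physical layer transmits the bits

print(encapsulate(b"GET /index.html"))
```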

Layered multicast of scalable media

With the fast deployment of Internet infrastructure, wired or wireless, the IP network is getting more and more heterogeneous. The heterogeneity of receivers within an IP multicast session significantly complicates the problem of effective data transmission. A major problem in IP multicast is the sending rate chosen by the sender. If the transmitted multimedia data rate is too high, it may cause packet loss or even congestion collapse, whereas a low transmission data rate will leave some receivers’ capacity underutilized.

This problem has been studied for many years and is still an active research area in IP multicast. To solve this issue, the transmission source should have a scalable rate, i.e., multirate, which allows transmission in a layered fashion. By using multirate, slow receivers can receive data at a slow rate while fast receivers can receive data at a fast rate. In general, multirate congestion control can perform well for a large multicast group with a large number of diverse receivers. This brings us to the scheme of layered multicast.

Basically, layered multicast is based on a layered transmission scheme, in which data is distributed across a number of layers that can be incrementally combined, thus providing progressive refinement. Scalable video coding (SVC) can easily provide such layered refinement. Thus, the idea of layered multicast is to encode the source data into a number of layers. Each layer is disseminated as a separate multicast group, and receivers decide to join or leave a group on the basis of the network conditions. The more layers the receiver joins, the better quality it gets. As a consequence of this approach, different receivers within a session can receive data at different rates. Also, the sender does not need to take part in congestion control.
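
Here is a minimal sketch of a receiver-driven join/leave policy in the spirit of the scheme just described: drop the top layer when loss indicates congestion, and probe one more layer after a quiet period. The thresholds and timings are illustrative assumptions.

```python
# Minimal sketch of receiver-driven layer subscription decisions.
import time

class LayeredReceiver:
    def __init__(self, max_layers: int = 4, loss_threshold: float = 0.05,
                 join_wait: float = 30.0) -> None:
        self.subscribed = 1               # always keep the base layer
        self.max_layers = max_layers
        self.loss_threshold = loss_threshold
        self.join_wait = join_wait
        self.last_change = time.monotonic()

    def on_measurement(self, loss_rate: float) -> None:
        now = time.monotonic()
        if loss_rate > self.loss_threshold and self.subscribed > 1:
            self.subscribed -= 1          # congestion: leave the top multicast group
            self.last_change = now
        elif (loss_rate == 0.0 and self.subscribed < self.max_layers
              and now - self.last_change > self.join_wait):
            self.subscribed += 1          # join experiment: try one more layer
            self.last_change = now

receiver = LayeredReceiver()
receiver.on_measurement(loss_rate=0.0)    # no loss yet; a layer is added only after a quiet period
```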

Layered Multicast Of Multimedia Networks

To avoid congestion, end systems are expected to be cooperative by reacting to congestion and adapting their transmission rates properly and promptly. The majority of traffic in the Internet is best-effort traffic. Transmission control protocol (TCP) traffic uses an additive-increase multiplicative-decrease (AIMD) mechanism, in which the sending rate is controlled by a congestion window. The congestion window is halved for every window of data containing a packet drop and increased by roughly one packet per window of data otherwise. Similarly, IP multicast for UDP traffic needs a congestion control algorithm. However, IP multicast cannot simply adopt the TCP congestion control algorithm, because acknowledgements can cause an “implosion problem” in IP multicast. Owing to the use of different congestion control algorithms in TCP and multicast, the network bandwidth may not be shared fairly between competing TCP and multicast flows. The lack of an effective and “TCP friendly” congestion control is the main barrier to the wide-ranging deployment of multicast applications. “Scalability” refers to the behavior of the protocol in relation to the number of receivers and network paths, their heterogeneity, and the ability to accommodate dynamically variable sets of receivers. The IP multicast model provided by RFC 1112 is largely scalable, as a sender can send data to a nearly unlimited number of receivers. Therefore, layered multicast congestion control mechanisms should be designed carefully to avoid degrading scalability.
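
The AIMD rule quoted above can be written down in a few lines; the numbers in the example run are purely illustrative.

```python
# Minimal sketch of the AIMD rule: additive increase of roughly one packet per
# window, multiplicative decrease (halving) when a window contains a drop.
def aimd_update(cwnd: float, window_had_loss: bool) -> float:
    if window_had_loss:
        return max(1.0, cwnd / 2.0)  # multiplicative decrease
    return cwnd + 1.0                # additive increase, per window of data

cwnd = 10.0
for loss in [False, False, True, False]:
    cwnd = aimd_update(cwnd, loss)
    print(cwnd)  # 11.0, 12.0, 6.0, 7.0
```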

Thursday, January 8, 2015

Actively Reserved Bandwidth Architecture

The MPLS architecture has some shortcomings. First, it places a heavy load on edge-LSRs, which must calculate the paths of traffic through the MPLS domain as traffic increases considerably. Second, it takes time to establish a path, and when network resources are insufficient it costs even more time to re-establish one. Therefore, a new architecture, called the Actively Reserved Bandwidth Architecture, has been proposed to improve the original MPLS architecture.

In the Actively Reserved Bandwidth Architecture, each core-LSR reserves some bandwidth for every edge-LSR. These core-LSRs construct a path that holds the reserved bandwidth for the edge-LSR. When a traffic flow requests transmission from an edge-LSR, the edge-LSR can quickly find a path with sufficient bandwidth to transmit it, without calculating and establishing the path in the MPLS domain. This reduces the chance of failure in establishing a path and decreases the load on the edge-LSR. The approach is therefore well suited to multimedia traffic, with its strict delay constraints. Sometimes an edge-LSR requests bandwidth exceeding the reserved capacity. In that case the edge-LSR falls back to the original architecture, calculating and establishing a path to transmit the traffic flow in the MPLS domain. This may consume another edge-LSR's reserved bandwidth in some core-LSRs. When that happens, those core-LSRs signal the edge-LSR whose reserved bandwidth has been taken and cut down the bandwidth reserved for it accordingly.
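
The admission logic described above boils down to a simple check at the edge-LSR. This is only a sketch, and the names and bandwidth figures are assumptions.

```python
# Minimal sketch: try the pre-reserved path first; fall back to explicit path
# setup only when the request exceeds the reserved capacity.
def admit_flow(requested_mbps: float, reserved_available_mbps: float) -> str:
    if requested_mbps <= reserved_available_mbps:
        return "use pre-reserved path"        # no path computation or signaling needed
    return "fall back to CR-LDP path setup"   # original MPLS behavior

print(admit_flow(requested_mbps=4.0, reserved_available_mbps=10.0))   # pre-reserved path
print(admit_flow(requested_mbps=25.0, reserved_available_mbps=10.0))  # CR-LDP setup
```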

Advantages
The advantages of Actively Reserved Bandwidth Architecture are pointed out as follows:
  • To reduce the need of establishing a path. If the reserved bandwidth is sufficient, the edge-LSR has no need to establish a path.
  • To decrease the probability of failure in establishing a path. When the reserved bandwidth is occupied by other edge-LSR, the core-LSRs will actively notify the edge-LSR.
  • To be suitable for real-time multimedia traffic. Establishing paths results in time delay.
According to the features of MPEG-4 video, treating one VOP as the unit of transmission is more efficient than treating one packet as the unit of transmission. The Actively Reserved Bandwidth Architecture improves on CR-LDP in the MPLS domain. This architecture achieves better results and is more suitable for multimedia traffic, such as MPEG-4 video. Further research on the Actively Reserved Bandwidth Architecture is expected. Because network traffic on the Internet is dynamic, a core-LSR can hardly reserve a definite amount of bandwidth for an edge-LSR that handles all cases. Many studies have emphasized the traffic engineering of MPLS, which is an important topic for further research. Besides, there is a trend in Internet routers toward differentiating the type of incoming network traffic. This makes flow-oriented control easier in LSRs and also supports QoS mechanisms. It is necessary to have an optimal design of bandwidth reservation in core-LSRs for each edge-LSR.

Tuesday, December 23, 2014

How Solar Cells Work

You've probably seen calculators that have solar cells -- calculators that never need batteries, and in some cases don't even have an off button. As long as you have enough light, they seem to work forever. You may have seen larger solar panels -- on emergency road signs or call boxes, on buoys, even in parking lots to power lights. Although these larger panels aren't as common as solar-powered calculators, they're out there, and not that hard to spot if you know where to look. There are solar cell arrays on satellites, where they are used to power the electrical systems.

You have probably also been hearing about the "solar revolution" for the last 20 years -- the idea that one day we will all use free electricity from the sun. This is a seductive promise: on a bright, sunny day, the sun delivers approximately 1,000 watts of power per square meter of the planet's surface, and if we could collect all of that energy we could easily power our homes and offices for free.
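
As a back-of-the-envelope check on that 1,000 watts per square meter figure, here is a tiny calculation; the panel area and efficiency are assumed values, not data from this post.

```python
# Rough estimate of one panel's peak output from the ~1,000 W/m^2 figure above.
IRRADIANCE_W_PER_M2 = 1000  # bright, sunny day, panel facing the sun
PANEL_AREA_M2 = 1.6         # a typical residential panel (assumed)
EFFICIENCY = 0.20           # assumed cell efficiency

peak_power_w = IRRADIANCE_W_PER_M2 * PANEL_AREA_M2 * EFFICIENCY
print(peak_power_w)         # about 320 W at peak for one panel
```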

Examine solar cells to learn how they convert the sun's energy directly into electricity. In the process, you will learn why we are getting closer to using the sun's energy on a daily basis, and why we still have more research to do before the process becomes cost effective.

Wednesday, December 17, 2014

Information Warfare of the Future: Monitoring Everything

Wireless systems capable of monitoring vehicles and people all over the planet (basically everything) are leaving businesses and the military aglow with new possibilities, and some privacy advocates deeply concerned. Companies seeking to tap the commercial potential of these technologies are installing wireless location systems in vehicles, hand-held computers, cell-phones (even watchbands). Scientists have developed a chip that can be inserted beneath the skin, so that a person’s location can be pinpointed anywhere.

Various plans already under way include alerting cell-phone users when they approach a nearby store, telling them which items are on sale, or sending updates to travelers about hotel vacancies or nearby restaurants with available tables. Another company may provide parents with wireless watchbands that they can use to keep track of their children. Although the commercial prospects for wireless location technology may be intriguing, and the social benefits of better mobile emergency service are undisputed, privacy-rights advocates are worried. By allowing location-based services to proliferate, you’re opening the door to a new realm of privacy abuses. What if your insurer finds out you’re into rock climbing or late-night carousing in the red-light district? What if your employer knows you’re being treated for AIDS at a local clinic? The potential is there for inferences to be drawn about you based on knowledge of your whereabouts.

Until recently, location-based services belonged more to the realm of science fiction than to commerce. Although satellite-based Global Positioning System technology has been commercially available for some time for airplanes, boats, cars, and hikers, companies have only recently begun manufacturing GPS chips that can be embedded in wireless communications devices. GPS uses satellite signals to determine geographic coordinates that indicate where the person with the receiving device is situated. Real-life improvements in the technology have come largely from research initiatives by start-up companies in the United States, Canada, and Europe as well as from large companies like IBM, which recently formed a “pervasive computing” division to focus on wireless technologies such as location-based services. Location technology is a natural extension of ebusiness. It’s no surprise that a whole new ecology of small companies has been formed to focus on making it all more precise.

For instance, a professor helped to create a chip called “Digital Angel” that could be implanted beneath human skin, enabling his company to track the location of a person almost anywhere using a combination of satellites and radio technology. After all, he reasoned, wouldn’t the whereabouts of an Alzheimer’s patient be important to relatives? Wouldn’t the government want to keep track of paroled convicts? Wouldn’t parents want to know where their children are at 10 P.M., 11 P.M., or any hour of the day? A review of Digital Angel’s commercial potential, though, revealed concern over the possibility of privacy abuses. So the professor, the chief scientist for a company that makes embedded devices for tracking livestock, altered his plans for Digital Angel, which is about the size of a dime, so that instead of being implanted it could be affixed to a watchband or a belt. Embedding technology in people is too controversial. But that doesn’t mean a system capable of tracking people wherever they go won’t have great value. Although Digital Angel is still in the prototype stage, the company is planning to make it commercially available in 2002.

Digital Angel system architecture
Some companies are even more ambitious, with plans to map every urban area in the world and allow these maps to be retrieved in real time on wireless devices. Yet while businesses around the world seek to improve the quality of location-based services, the biggest impetus behind the advancement of the technology has come from the government, through its effort to improve the precision of locating wireless emergency calls. With the number of wireless users growing, carriers will need to begin equipping either cell-phones or their communications networks with technology that would allow authorities to determine the location of most callers to within 300 feet, compared with current systems that can locate them to within about 600 feet.

People are justifiably concerned with the rapidity with which this technology is being deployed. They need to be assured that there is no conspiracy to use this information in an underhanded way.

Sunday, November 9, 2014

MPLS-Based VPN Service

With MPLS, ISPs offer a new and different type of wide area service in their networks. These services are designed to address the performance and security requirements of enterprise customers, particularly VoIP users. Unlike traditional best effort Internet service, MPLS provides a structure whereby an ISP can provide a packet service with performance guarantees for jitter, delay, and packet loss.

MPLS adds two important elements to traditional IP:

1. Virtual Circuit/Label Switched Path (LSP): Unlike traditional IP, which is connectionless, MPLS routes all of the packets for a particular session over a virtual circuit. The MPLS specifications do not call it a virtual circuit (that would make things too obvious); it is called a Label Switched Path (LSP). That LSP provides two basic advantages over traditional IP:
  • Security: Transmissions cannot jump between virtual circuits within the network. As a result, the user should not need to encrypt transmissions. Users with particularly sensitive transmissions like financial information may still choose to encrypt MPLS traffic, though the security features offered in MPLS should be adequate for most enterprise customers.
  • Ordered Delivery: A virtual circuit also ensures that all parts of the message arrive in order. As higher-level protocols (e.g., TCP, RTP) can reorder mis-sequenced packets, this feature has less user impact.
2. Capacity Reservation/QoS: Before MPLS will allow an LSP to be established over a link, it ensures there is sufficient capacity available to meet the requirements of the connection. A carrier cannot ensure performance by simply assigning priorities; all a priority system does is treat some transmissions better than others, and priority does not mean the system treats anyone very well! Ensuring performance requires a capacity reservation mechanism, which is one of the key features of MPLS: it supports multiple service classes and defines different delay and loss parameters for each (a minimal admission-control sketch follows this list).
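
The capacity-reservation idea can be pictured as a simple admission-control check along the candidate path. The Python sketch below is illustrative only (the link names, capacities, and demand figures are invented, and it is not a rendering of any carrier's signalling protocol), but it captures the rule that an LSP is admitted only if every hop still has enough unreserved bandwidth.

# Illustrative admission control for an LSP: the path is admitted only if
# every link along it still has enough unreserved bandwidth. Link names
# and capacities are hypothetical.
link_capacity_mbps = {("A", "B"): 1000, ("B", "C"): 600, ("C", "D"): 1000}
reserved_mbps = {link: 0 for link in link_capacity_mbps}

def admit_lsp(path, demand_mbps):
    """Reserve `demand_mbps` on each hop of `path`, or refuse the LSP."""
    hops = list(zip(path, path[1:]))
    if any(reserved_mbps[h] + demand_mbps > link_capacity_mbps[h] for h in hops):
        return False                      # not enough headroom somewhere
    for h in hops:
        reserved_mbps[h] += demand_mbps   # commit the reservation
    return True

print(admit_lsp(["A", "B", "C", "D"], 400))   # True  - fits on every hop
print(admit_lsp(["A", "B", "C", "D"], 400))   # False - B->C would exceed 600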

mpls_network_configuration
MPLS Network Configuration
The other basic attribute of MPLS-VPN services is that they provide full mesh connectivity. Unlike earlier frame relay services, which required a virtual circuit between any pair of points that would communicate directly, in an MPLS network any end point can communicate with any other. When the user’s network is provisioned, a full mesh of LSPs is created among all end points. The user pays for access at each network location, not for virtual circuits, so a mesh network and a hub-and-spoke configuration have the same cost. Finally, as the MPLS capability is provided within the carrier’s network, it is essentially transparent to the user’s router configuration. All the user does is set the DiffServ Code Point (DSCP) in each packet, which assigns the packet to a particular service class (e.g., voice, video, data).
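
From the user side, that marking step can be as simple as setting the DSCP on outgoing packets. The Python sketch below is a minimal illustration, assuming a platform that supports the IP_TOS socket option; the class-to-DSCP mapping (EF for voice, AF41 for video, best effort for data) and the destination address are assumptions, since the classes a given provider honors are defined in the service contract.

import socket

# DiffServ Code Points (standard values); which classes an MPLS-VPN provider
# honors, and how, is defined in the service contract.
DSCP = {"voice": 46, "video": 34, "data": 0}   # EF, AF41, best effort

def udp_socket_with_dscp(traffic_class):
    """Create a UDP socket whose packets carry the requested DSCP marking."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tos = DSCP[traffic_class] << 2        # DSCP sits in the upper 6 bits of the TOS byte
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return sock

voice_sock = udp_socket_with_dscp("voice")
voice_sock.sendto(b"rtp-payload", ("192.0.2.10", 5004))  # example address/port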

Sunday, October 19, 2014

Virtual Private Clouds Technology

The concept of a virtual private cloud (VPC) has emerged recently as a way of managing information technology resources so that they appear to be operated for a single organization from a logical point of view, but may be built from underlying physical resources that belong to the organization, an external service provider, or a combination of both. Several technologies are essential to the effective implementation of a VPC. Virtual data centers provide the insulation that sets one organization’s virtual resources apart from those of other organizations and from the underlying physical infrastructure.

Virtual applications collect those resources into separately manageable units. Policy-based deployment and policy compliance offer a means of control and verification of the operation of the virtual applications across the virtual data centers. Finally, service management integration bridges across the underlying resources to give an overall, logical and actionable view. These key technologies enable cloud providers to offer organizations the cost and efficiency benefits of cloud computing as well as the operational autonomy and flexibility to which they have been accustomed.

A cloud is a pool of configurable computing resources (servers, networks, storage, etc.). Such a pool may be deployed in several ways:
  • A private cloud operated for a single organization;
  • A community cloud shared by a group of organizations;
  • A public cloud available to arbitrary organizations; or
  • A hybrid cloud that combines two or more clouds.
Mell and Grance (2009) give the full definition of a private cloud: "the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise". The definition suggests three key questions about a cloud deployment:
  1. Who uses the cloud infrastructure?
  2. Who runs the infrastructure?
  3. Where is the infrastructure?
The distinction among private, community, public, and hybrid clouds is based primarily on the answer to the first question. The second and third questions are implementation options that may apply to more than one deployment model; in particular, a cloud provider may run and/or host the infrastructure in all four cases. Although NIST’s definition does not state so explicitly, there is an implication that the cloud infrastructure refers to physical resources. In other words, the computing resources in a private cloud are physically dedicated to the organization; they are used only (i.e., “solely”) by that organization for a relatively long period of time. In contrast, the computing resources in a public or community cloud are potentially used by multiple organizations, even over a short period of time. The physical orientation of the definition motivates the concept of a virtual private cloud, which, following the usual virtualization paradigm, gives the appearance of physical separation without requiring it.

In other words, a VPC offers the function of a private cloud though not necessarily its form. The VPC’s underlying, physical computing resources may be operated for many organizations at the same time. Nevertheless, the virtual resources presented to a given organization – the servers, networks, storage, etc. – will satisfy the same requirements as if they were physically dedicated. The possibility that the underlying physical resources may be run and/or hosted by a combination of the organization and a third party is an important aspect of the definition, as was first articulated by R. Cohen in a May 2008 blog posting (Cohen, 2008) that introduced the VPC concept:
"A VPC is a method for partitioning a public computing utility such as EC2 into quarantined virtual infrastructure. A VPC may encapsulate multiple local and remote resources to appear as a single homogeneous computing environment bridging the ability to securely utilize remote resources as part of a seamless global compute infrastructure".
Subsequent work has focused on a specific implementation profile where the VPC encompasses just the resources from the public cloud.
virtual_private_cloud_implementation
Primary Virtual Private Cloud (VPC) Implementation Profile
Likewise, Amazon describes its virtual private cloud in a January 2010 white paper (Extend Your IT Infrastructure with Amazon Virtual Private Cloud, http://aws.amazon.com/vpc/) as “an isolated portion of the AWS cloud,” again connected to internal resources via a VPN. In both Wood et al. and Amazon’s offering, a VPC has the appearance of a private cloud and so meets the more general definition stated above. However, this implementation profile imposes the limitation that the physical resources underlying the VPC are hosted and run by a cloud provider; in other words, the answer to the second and third questions above is “external.” Although internal resources, e.g., the “enterprise site” of Wood et al., are connected to the VPC over the VPN, they are not part of the VPC proper. The primary VPC implementation profile considered here is therefore one in which the underlying resources are drawn from a public cloud and an internal, private cloud – in other words, from a hybrid cloud that combines the two – and the question becomes how those resources are managed in order to meet organizational IT requirements.
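
As a concrete point of reference, the Python sketch below shows roughly how such an isolated address space can be requested programmatically from Amazon’s service using the boto3 EC2 client; the region, CIDR blocks, and the choice to attach a VPN gateway are placeholder decisions for illustration, not a prescription for any particular deployment.

import boto3

# Minimal sketch: carve out an isolated address space in the AWS cloud and
# attach a VPN gateway so it can be bridged to on-premise resources.
# Region and CIDR blocks are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.10.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.10.1.0/24")

vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])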

Sunday, September 28, 2014

Cloud Computing: Knowledge Management

Knowledge Management in a cloud computing environment requires a paradigm shift, not just in technology and operational procedures and processes, but also in how providers and consumers of knowledge within the enterprise think about knowledge. The knowledge-as-a-service, “on-demand knowledge management” model provided by the cloud computing environment can enable several important shifts in the way knowledge is created, harvested, represented and consumed.

Collective intelligence is a phenomenon that emerges from the interaction – social, collaborative, competitive – of many individuals. By some estimates there are more than eighty million people worldwide writing web logs (“blogs”). The blogs are typically topic-oriented, and some attract a sizable readership. Authors range from large-company CEOs to administrative assistants and young children. Taken together, the blogs hosted on cloud computing infrastructure – the “blogosphere” – form a vast social agglomeration that provides a kind of collective intelligence. But it is not just blogs that form the collective intelligence: the phenomenon is nurtured and enhanced by the social and participatory culture of the internet, so all content developed and shared on the internet becomes part of it. The internet, and the content available there, then appears as an omnipresent, omniscient, giant infrastructure – a new form of knowledge management. The same paradigm applies, albeit on a smaller scale, to the enterprise cloud: the socialisation and participatory culture of today’s internet is mirrored in the microcosm of the enterprise.

Today this represents collaboration mostly among people, but in the near future we may envisage intelligent virtual objects and devices collaborating with people as well. Indeed, this is already beginning to happen to some extent as internet-attached devices start to proliferate. Thus, rescaling from the current ~1.2 billion users to tens or even hundreds of billions of real-world objects having a data representation in the virtual world is probably realistic. It is important to note that content will no longer be located almost solely in a central knowledge repository on a server in the enterprise data centre. Knowledge in the cloud is very much distributed throughout the cloud and does not always reside in structured repositories with well-known query mechanisms.
cloud_enterprise_knowledge_management
Architectural View Of Enterprise Knowledge Management
Knowledge management applications offered in the cloud need to be capable of crawling through the various structured and ad-hoc repositories – some perhaps even transient – to find and extract or index knowledge, and that requires that those applications be capable of recognising knowledge that might be useful to the enterprise knowledge consumers. Furthermore, we believe that over time multimedia content will become dominant over ordinary text, and that new methods for media-rich knowledge management will need to be devised. Even in the smaller world of the enterprise, a real danger, and a real problem to be solved by knowledge management practitioners, is how to sort the wheat from the chaff – the knowledge from the data and information – in an environment where the sheer amount of data and information could be overwhelming. The most promising domain for Enterprise Knowledge Management is enterprise IT itself, an area under huge cost pressure yet essential for strategic development. From a highly abstracted view, the Enterprise Knowledge Management IT domain consists of problem solving, monitoring, tuning and automation, business intelligence and reporting, and decision making tasks.
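
As a toy illustration of the crawl-and-index step described above, the Python sketch below builds an inverted index over a handful of hypothetical repository documents; in a real deployment the content would come through connectors to wikis, ticketing systems, file shares, and other (possibly transient) sources, and recognising genuinely useful knowledge would involve far more than keyword matching.

from collections import defaultdict

# Toy crawl-and-index pass over a set of repository documents (hypothetical
# content and identifiers).
documents = {
    "wiki/outage-2014-03": "database failover runbook and postmortem notes",
    "tickets/INC-1042":    "vpn tunnel flap traced to mtu mismatch",
    "share/capacity.xlsx": "quarterly storage capacity planning figures",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    """Return the documents in which a term appears."""
    return sorted(index.get(term.lower(), set()))

print(search("capacity"))   # ['share/capacity.xlsx']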

The tasks of problem solving, monitoring, tuning and automation, business intelligence and reporting, and decision making are the most promising areas for the future deployment of Enterprise Knowledge Clouds. The knowledge available to both IT administrators and automated management agents via the Enterprise Knowledge Cloud will help drive the development of a slew of new technologies addressing problems that previous computing facilities could not resolve. Currently, the majority of these IT tasks involve people; we suggest that this balance will shift toward automation in the future, ultimately leading to self-managing enterprise IT systems. When mapped into more precise form, this conceptual view will evolve into an enterprise-scale knowledge management application stack.

Friday, August 8, 2014

The Future Of Phoning: Enter The iPhone

One of the most famous examples of a revolution in design came with the introduction of Apple’s iPhone in early 2007. This GSM smartphone features a compact size; a high-resolution, full-color screen almost as big as the unit itself; a built-in iPod music player; touch-screen functions for navigating menus; and an on-screen keyboard. Unprecedented publicity led to heavy demand, and when it went on sale in the United States in June, some stores sold out within hours.

Apple stores sold an estimated 128,000 iPhones on the first day; Apple’s telecommunications partner, AT&T, sold an estimated 78,000 in the same period. Many observers, both inside and outside the mobile phone industry, heralded Apple’s entry into the cellular phone market. Citing the company’s history of innovation and user-friendly features in personal computing and portable music players, critics and consumers alike expected the iPhone to be a hit. Certainly its sleek design, uncluttered by buttons and keys, seemed simple and elegant and added to its appeal. The iPhone’s beautiful design, intuitive menus, and touch-screen technology that responded to finger touches, sweeps, and flicks won raves from both users and critics. It received a variety of awards in 2008, including the prestigious Black Pencil Award for achievement in product design from D&AD, an educational charity that recognizes excellence in design and creativity. However, its hardware left some critics less than impressed. The original 2G model had just 4 gigabytes of storage.

Touch-screen technology had already been available on an award-winning model from LG in 2006, and the Nokia 9300i had a larger screen. The iPhone’s camera offered lower resolution than many other camera phones on the market, and it had no video recording capability. Additionally, many SMS enthusiasts and mobile marketing experts felt the touch screen forced the user to type messages with both hands, unlike phones with traditional keypads. Nevertheless, mobile marketing experts acknowledged that it was a hit with consumers. For example, the more advanced iPhone 3G, with 8 gigabytes of storage, won the Phone of the Year Award for 2008 from Great Britain’s What Mobile magazine. It has become an extremely familiar model in a field of hundreds of other mobiles, and it serves as an example of how far mobile innovation and design have come in less than forty years.

As a way of measuring these advances, compare the first commercially available mobile phone, Motorola’s DynaTAC 8000X, to the iPhone. The DynaTAC weighed close to 2 pounds (907g) and measured 13 inches tall by 1.75 inches wide by 3.5 inches deep (33cm by 4.4cm by 8.9cm). The iPhone 3G weighs 4.7 ounces (133g) and measures 4.5 inches by 2.4 inches by 0.46 inches deep (11.4cm by 6.0cm by 1.2cm). The DynaTAC cost close to $4,000 in 1983; the iPhone today costs as little as $99.

Competition Leads to Variety
Consumers do not need to buy an iPhone to take advantage of advances in design and power. Complex chip sets and advanced electronics offer amazing speed, full-color graphics, and light weight in even inexpensive mobile phones. For example, a cell phone that debuted in October 2008 had a slide-out QWERTY keyboard, expandable memory up to 16 gigabytes, stereo sound, and a camera with the ability to capture images with a 2-megapixel resolution. The phone’s suggested retail price was $50.

The competition among cellular phone manufacturers and network providers remains fierce. Within a year of the release of the iPhone 3G, several manufacturers offered phones with similar capabilities for a lower price than Apple’s device, or with more advanced features for a similar price. For example, the Samsung Omnia offers a 5-megapixel camera for roughly the same price as the iPhone, which has a 2-megapixel unit. As consumers navigate the maze of competing devices, features, and networks, cellular companies are working to develop the next generations of connectivity. The changes from 1G devices and networks to 2G digitization were clear-cut: a device and a network were either digital or not. However, the advances beyond 2G are murkier.

Friday, June 6, 2014

Layered Multicast Of Scalable Media : Introduction

With the fast deployment of Internet infrastructure, wired and wireless, the IP network is becoming more and more heterogeneous. The heterogeneity of receivers within an IP multicast session significantly complicates the problem of effective data transmission. A major problem in IP multicast is choosing the sending rate at the source: if the transmitted multimedia data rate is too high, it may cause packet loss or even congestion collapse, whereas a rate that is too low will leave faster receivers underutilized.
heterogeneity_of_the_receivers_under_an_IP_multicast_session
Heterogeneity of the receivers under an IP multicast session.
This problem has been studied for many years and is still an active research area in IP multicast. To address it, the transmission source should support a scalable, i.e., multirate, sending scheme that allows transmission in a layered fashion. With multirate transmission, slow receivers can receive data at a slow rate while fast receivers receive data at a fast rate. In general, multirate congestion control performs well for a large multicast group with many diverse receivers. This brings us to the scheme of layered multicast.

Basically, layered multicast is based on a layered transmission scheme, in which data is distributed across a number of layers that can be incrementally combined, thus providing progressive refinement. Scalable video coding (SVC) can easily provide such layered refinement. The idea of layered multicast is therefore to encode the source data into a number of layers; each layer is disseminated as a separate multicast group, and receivers decide to join or leave a group on the basis of the network condition. It is assumed that the data to be transmitted can be distributed into l multicast groups with bandwidths Li, i = 0, . . . , l-1. Receivers adjust their reception rates by using the cumulative layered transmission scheme, and adaptation to heterogeneous requirements becomes possible because it can be done independently at each receiver. On the basis of the network condition, a particular receiver can subscribe to a bandwidth Bi by joining layers L0, L1, . . . , Li:
Bi = L0 + L1 + . . . + Li
Bandwidth obtained by a receiver subscribing to layers 0 through i.
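
As a simple illustration of the cumulative scheme, the Python sketch below picks the highest layer a receiver can afford given its available bandwidth; the layer rates are invented for the example.

# Cumulative layered subscription: a receiver joins layers 0..i as long as
# the cumulative bandwidth Bi stays within its available capacity.
# Layer rates are illustrative (e.g., kbit/s of a scalable video stream).
layer_rates = [64, 128, 256, 512]     # L0 .. L3

def highest_affordable_layer(available_bw):
    """Return the largest i such that L0 + ... + Li <= available_bw, or -1."""
    cumulative, best = 0, -1
    for i, rate in enumerate(layer_rates):
        cumulative += rate
        if cumulative > available_bw:
            break
        best = i
    return best

print(highest_affordable_layer(500))   # 2  -> subscribe to layers 0, 1, 2 (B2 = 448)
print(highest_affordable_layer(100))   # 0  -> base layer only (B0 = 64)
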
The more layers the receiver joins, the better quality it gets. As a consequence of this approach, different receivers within a session can receive data at different rates. Also, the sender does not need to take part in congestion control.
group_of_layered_multicast
Layered multicast group.
To avoid congestion, end systems are expected to be cooperative, reacting to congestion and adapting their transmission rates properly and promptly. The majority of traffic in the Internet is best-effort traffic. Transmission Control Protocol (TCP) traffic uses an additive-increase multiplicative-decrease (AIMD) mechanism, in which the sending rate is controlled by a congestion window: the window is halved for every window of data containing a packet drop and increased by roughly one packet per window of data otherwise. Similarly, IP multicast traffic, which is carried over UDP, needs a congestion control algorithm. However, IP multicast cannot simply adopt the TCP congestion control algorithm, because acknowledgements would cause an “implosion problem” in IP multicast. Owing to the use of different congestion control algorithms in TCP and multicast, the network bandwidth may not be shared fairly between competing TCP and multicast flows. The lack of an effective, “TCP-friendly” congestion control mechanism is the main barrier to the wide-ranging deployment of multicast applications.
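
For reference, the AIMD rule described above can be sketched in a few lines of Python; the loss pattern and initial window are invented, and real TCP implementations add refinements (slow start, fast recovery, and so on) that are omitted here.

# Minimal AIMD sketch of the TCP behaviour described above: the congestion
# window grows by roughly one packet per window of data and is halved
# whenever a window contains a packet drop. The loss pattern is invented.
def aimd(loss_events, initial_cwnd=10.0):
    cwnd = initial_cwnd
    trace = []
    for window_had_loss in loss_events:
        cwnd = max(1.0, cwnd / 2) if window_had_loss else cwnd + 1
        trace.append(cwnd)
    return trace

print(aimd([False, False, False, True, False, False, True]))
# [11.0, 12.0, 13.0, 6.5, 7.5, 8.5, 4.25]
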

“Scalability” refers to the behavior of the protocol in relation to the number of receivers and network paths, their heterogeneity, and their ability to accommodate dynamically variable sets of receivers. The IP multicasting model provided by RFC-1112 is largely scalable, as a sender can send data to a nearly unlimited number of receivers. Therefore, layered multicast congestion control mechanisms should be designed carefully to avoid scalability degradation.