
Friday, February 13, 2009

Online job, simple: get paid to post ads

The best project, no scam: a genuine project which pays regularly, and you can work from anywhere you want, home or office, wherever there is Internet access.

It is an online ad-posting job that earns you an income. You place ads on the relevant websites, and each ad must be approved by the classified site.

If you want training for the job, I can provide it (a few hours of online training, for a fee).

Features of the Job:

1) Very easy project
2) Six months validity
3) Receive payment every month
4) Copy and paste job
5) No time limit (you can work at any time you want)
6) Live Reports

DETAILS OF REGISTRATION FOR THE COMPANY (your choice):
Type | Reg. Fee (Rs) | Amount Per Ad (Rs) | Ads Placed Per Day | Earning Per Day | Total Monthly Earnings
I    | 200           | 0.50               | Unlimited          | Unlimited       | Unlimited
II   | 350           | 1                  | Unlimited          | Unlimited       | Unlimited
III  | 850           | 2                  | Unlimited          | Unlimited       | Unlimited
IV   | 1100          | 3                  | Unlimited          | Unlimited       | Unlimited


Duration of work: six months.

Referral commission: 5% of your referrals' daily earnings.

Visibility of Income:

In this project, you choose one of the four types above, with which you can earn unlimited income.
Suppose you choose Type II (Rs. 700/-): you will be paid Rs. 2/- per approved ad, which you can track from the official mail ID given to you by the company once registration is complete. This official ID is shared between you and the company.

In 1 hour you can post nearly 50 ads, i.e. you earn Rs. 2/- x 50 = Rs. 100/- per hour. If you work 2 hours a day, you get Rs. 100/- x 2 = Rs. 200/- per day, and the job is valid for 6 months.

In a week, you can earn Rs. 200 x 6 days = Rs. 1,200/- (for all approved ads).
One important thing: you must reach the minimum payout of Rs. 100/- to receive a cheque.

Payment is made by cheque every month, sent to your house address.

If you are really interested in joining, contact me:

vignesh.kaif@gmail.com

or

call

+91 9003170363

US, Russian Satellites Collide, NASA Says

Iridium, a commercial U.S. communications satellite, and Cosmos, a Russian communications satellite, collided 790 kilometers above northern Siberia, NASA said yesterday.

The Wall Street Journal said the crash created two debris clouds that pose a safety risk to other satellites and the International Space Station, which flies in an orbit below the impact height.

There have been four collisions of objects in space, but this is the first between two satellites. Iridium weighed about 560 kilograms and Cosmos 950 kilograms. NASA said Iridium was operational, but Cosmos had been non-operational for several years.

Since the launch of Sputnik I by the Soviet Union in 1957, 3,000 of 6,000 launched satellites have remained in operation. NASA said roughly 17,000 pieces of space debris larger than 10 centimeters exist.

Courtesy:donga.com

Monday, February 9, 2009

Colours, computers and colour management

Colour is something that most of us take for granted, but for manufacturers, lack of colour control can be a serious economic problem. Visible inconsistency in paints, inks and dyes costs money. For example, a car manufacturer couldn’t sell cars if a particular paint colour varied from one car to the next. For product advertising, manufacturers need to be able to represent a product’s colour accurately in print, on the Internet, in film and on television. For this reason, as soon as it was possible for industries to produce a wide range of colour pigments, our perception of colour, and the measurement and control of colour, became the subject of a great deal of study.

At home and at work even computer users who aren’t colour professionals are finding that they need to manipulate colour images. Intentional colour management provides the means both to control and reproduce colours as accurately as possible.

To apply colour management effectively you need to know the colour theory behind it. This article introduces some of the ideas of colour theory and shows how it is used in digital colour management.

A Brief History Of Colour
Humans have been using coloured pigments for thousands of years, although for much of this time only a limited range of colours was available. The use of oils blended with new pigments by Jan van Eyck, best known for his 1434 painting of “Giovanni Arnolfini and his wife”, revolutionised painting. However, it is only since the eighteenth century that a really wide range of low-cost colour pigments has been available. The Industrial Revolution saw the invention of aniline dyes from coal products in the 1800s, and the first colour photograph was made in 1861.
The earliest colour space still in use, the CIE XYZ space, wasn’t published until 1931 (colour spaces will be described later). Apple incorporated basic colour management into its operating system quite early on and was instrumental in establishing the International Color Consortium, or ICC (www.color.org), in 1993. Apple’s early support of colour control helped to establish its products in the field of art and design, an area in which the company has had considerable success. Microsoft Windows did not provide any colour management facility until Windows 98, and colour management tools have only recently appeared in Linux.
One of the problems in understanding colour management is that there are numerous standards and methodologies, each a product of the time it was introduced and the particular colour problems it was intended to address. For example, instead of just one colour model or space there are several, which can be confusing to the colour management novice. Another problem is that colour and colour management are a complicated subject, and a great deal of the available information is either misleading or just plain wrong.

Colour Does Not Exist
Strictly speaking colour is a product of the human visual system. All that can physically be measured are the wavelength and intensity of light and not its colour. Certain mixtures of wavelengths can only be referred to as purple or brown because of a general agreement that that is what we call the visual sensations they produce. Since colour is a subjective sensation, colours are not experienced in the same way by everyone. You might think this rather difficult to prove, since we cannot directly experience the thoughts and sensations of others, but about two per cent of females and eight per cent of males have some degree of colour blindness. The most common form of colour blindness is rather misleadingly referred to as red-green colour blindness.

A great deal of the effort in colour science has gone into methods of relating physical measurement to the human perception of colour. Unfortunately this is where most of the confusion over colour control and management arises. For one thing, our visual perception system is not linear, so colour systems based directly on linear measurements (like the CIE XYZ 1931) do not map well to our experience of colour. For example, there is quite a large portion of the green area of the CIE XYZ space that is perceived by humans as being all the same colour.

Comparing Senses – Hearing To Sight
There have been attempts to draw parallels between music and colour and to produce a system to organise colours, in the same way that musical theory provides structures such as scales and harmony for musical composition. However, the human auditory and visual systems are rather different. The average human can hear a range of over 9 octaves, and the common musical instrument with the widest range, the concert grand piano, spans a little over 7 octaves. Most human voices can cover 2 to 3 octaves. Each musical octave is a doubling of frequency. If you apply the same idea to the frequency range of visible light, you will see that there is a span of slightly less than one “octave”. Visible wavelengths range from the short 380 nm (nanometres) of violet to 750 nm for the long-wavelength red. Generally we speak of only seven pure colours in the visible spectrum: violet, indigo, blue, green, yellow, orange and red. However, there has been some success with ideas of colour harmony, and various colour wheels are used by artists and designers to choose colours that go well together. Colours used for a particular design project are often organised in a palette. Colour palettes are commonly found in design software such as CorelDRAW.

Unlike our hearing, which can sense and identify sounds of a single wavelength, our eyes have three sets of sensors, known as cone cells, each of which senses light over a range of frequencies. The maximum sensitivity of one of these sets of sensors peaks in the blue, one in the green and one in the red. Outside their peak the sensitivity of each group of sensors falls off smoothly and the green sensors overlap with the blue and the red. Colours that fall outside the peak sensitivity of any of the sensors are recognised because of the balance of sensations they produce between two sensors. For example yellow stimulates both the green and the red sensors. Part of the reason our colour vision works so apparently well with just three sensors is because of the fairly limited human visual range. Our eyes also have rod cells, which respond only to light intensity.
Colour displays and printers rely on the way our eyes work and create a wide range of colours by mixing only three colours together.

Emitted, Reflected – Additive, Subtractive
Humans experience colour in two ways: by emitted light directly entering the eye or by light reflecting from the objects around us. Mostly we experience colour through reflection, and this means that the appearance of colour depends on the nature of the surface of the illuminated object and on the spectral content of the light that is illuminating it. For example, a sheet of “white” paper appears red when red light is shone on it and green when illuminated with green light. A sheet of red paper illuminated with green light appears black. This means that when working with colour and colour management, the nature of the light that was and/or will be used as the illuminant must be taken into account.

Primary And Secondary - RGB Or CMY / CMYK
All the colour imaging devices we use that emit light (computer monitors, televisions and projectors) use a mixture of red, green and blue light to produce a wide range of perceived colours. These devices use what’s called additive colour mixing. Red, green and blue are referred to as primary colours because they relate directly to the three groups of sensors in our eyes. On the other hand, printers use cyan, magenta and yellow and so-called subtractive colour mixing. These three colours are referred to as secondary or complementary colours because cyan is white minus all the red frequencies, magenta is white minus all the green frequencies and yellow is white minus all the blue frequencies. Or to put it another way, the three secondaries can be obtained by mixing pairs of primaries as follows: mixing blue and green light produces cyan, mixing red and blue gives you magenta, and mixing red and green makes yellow.
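The primary/secondary relationship above can be captured in a few lines. Here is a minimal sketch of the naive RGB-to-CMY(K) arithmetic; real printer pipelines go through ICC profiles rather than this direct formula:

# Naive RGB <-> CMY/CMYK conversion, channels normalised to 0.0-1.0.
# Illustrates the "secondary = white minus one primary" arithmetic only.

def rgb_to_cmy(r, g, b):
    # Each secondary is white minus one primary.
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_cmyk(c, m, y):
    # Pull the common grey component out into a black (K) channel, since
    # overprinting C+M+Y gives dark brown rather than a true black.
    k = min(c, m, y)
    if k == 1.0:              # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmy(1.0, 0.0, 0.0))                 # red -> (0.0, 1.0, 1.0)
print(cmy_to_cmyk(*rgb_to_cmy(0.2, 0.2, 0.2)))   # dark grey -> mostly K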

Why Are There Two Methods Of Producing Colour?
The reason that we use two methods of producing colour is this: for displays, the base state is black, so emitted light has to be added to that base state to produce colours; for printers the base state is the “white” of the paper and colours have to be selectively subtracted from that base state, leaving the required colour reflected from the page. This is why the paper used has such an effect on the final image quality and why using the more expensive, “whiter” papers results in a better image (unlike many paints, most printer inks are transparent to some degree). The expensive photo papers also have smoother surfaces and may have a gloss surface to reduce diffusion of the light striking the surface, resulting in greater image contrast. The variation in paper “white-ness” and the other variations in the nature of paper are the reason why printer drivers and colour managed applications have a choice in their output settings for paper type.
The pigments and dyes used for printing inks, and the process of printing on paper, aren’t as successful as the three-colour system used for displays. Printing cyan, magenta and yellow inks to get black usually results in a dark brown, so all colour ink printing uses an additional pigment, black, to form the CMYK system of colour printing (the letter K, from the end of the word black, is used to avoid confusion with blue). Photo printers extend the range of printable colours still further by adding more ink colours. Usually these are lighter versions of cyan, magenta and yellow.

White And Colour Temperature
In abstract, pure white might be regarded as emitted or reflected light that contains equal amplitudes of all the visible frequencies. Such pure white light never occurs in nature or in colour management. Our main source of illumination, sunlight, does not have a smooth or equal energy distribution and its frequency content varies depending on the time of day and on weather conditions. Artificial light is even less “pure”; for example, fluorescent light has a very uneven energy distribution with a number of large amplitude spikes at certain frequencies.
In practice white is always variable and relative, and the human visual system is extremely good at adjusting, so that, under a wide range of lighting conditions, the areas of view that emit or reflect the widest range of frequencies with similar strength are usually seen as “white”. Colour film cameras, lacking any built-in compensation, don’t do this, and the best that can be done is to use film optimised for different lighting conditions, such as “daylight” or “tungsten” film, and/or to supplement these with coloured filters fitted to the lens. Perhaps one of the biggest advantages of digital cameras is that they have auto white balance, manual white balance, or both. Even if the colour balance out of the camera does not look right, it’s easily tweaked with photo editing software, which frequently has an auto-colour or auto-white balance control.
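One common trick behind such auto white balance controls is the grey-world assumption: the average of a typical scene should come out neutral grey. A minimal sketch of that heuristic (not any particular camera's or editor's actual algorithm):

# Grey-world auto white balance: assume the scene averages to neutral
# grey, then scale each channel so its mean matches the overall mean.

def grey_world(pixels):
    # pixels: list of (r, g, b) tuples with values in 0.0-1.0
    n = len(pixels)
    means = [sum(p[i] for p in pixels) / n for i in range(3)]
    grey = sum(means) / 3.0
    gains = [grey / m for m in means]
    return [tuple(min(1.0, p[i] * gains[i]) for i in range(3)) for p in pixels]

# A scene with a warm (reddish) cast is pulled back towards neutral.
warm = [(0.8, 0.6, 0.4), (0.6, 0.45, 0.3)]
print(grey_world(warm))   # both pixels come out as neutral greys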

The variations in what may be considered white mean that this has to be taken account of in colour management. Colour temperature is often used as a reference for white. It’s based on the idea that a perfect black body radiates light according to its temperature. At zero Kelvin a black body radiates nothing, at 5,000 K it radiates yellow-white light rather like morning daylight, at 6,500 K it radiates blueish-white similar to overcast daylight at noon and at 9,300 K it radiates a hard blue-white light. D50 and D65 (for Daylight 5000 and 6500) are commonly used as lighting references. 9,300 K was frequently used as a default monitor setting because monitors were most efficient at this setting and produced high brightness.
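For reference, the CIE publishes an approximation for the chromaticity of a daylight illuminant at a given correlated colour temperature, which is how designations like D50 and D65 map to actual white points. A sketch of that published approximation:

# CIE daylight locus: approximate xy chromaticity for a daylight
# illuminant of correlated colour temperature T (in kelvin).
# Standard published approximation, valid from about 4000 K to 25000 K.

def daylight_xy(T):
    if 4000 <= T <= 7000:
        x = -4.6070e9 / T**3 + 2.9678e6 / T**2 + 0.09911e3 / T + 0.244063
    elif 7000 < T <= 25000:
        x = -2.0064e9 / T**3 + 1.9018e6 / T**2 + 0.24748e3 / T + 0.237040
    else:
        raise ValueError("outside the range of the approximation")
    y = -3.000 * x**2 + 2.870 * x - 0.275
    return x, y

print(daylight_xy(5000))   # D50: roughly (0.3457, 0.3587)
print(daylight_xy(6500))   # D65: roughly (0.3128, 0.3292)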

Representing Colour Using Colour Spaces
One of the most confusing things you may ever see in a text book on colour is the 2D representation of the 1931 CIE XYZ colour chart for RGB displays. It is rarely explained that this colour chart represents a plan view or projection from above the white point of a 3D volume and is shown this way purely for convenience, given the difficulty of representing 3D objects in print. All colour spaces are three dimensional volumes. The vertical axis of these spaces always represents luminance or brightness and runs from black at the bottom, through various shades of grey, to white at the top.
The most commonly used reference space today is the 1976 CIE L*a*b* (pronounced L-star, a-star, b-star). Rather confusingly, this is often referred to as Lab, but there is an earlier space, the 1948 Hunter Lab space, to which that nomenclature more properly belongs. L*, standing for Luminance, is the vertical axis, and the orthogonal axes a* and b* are red-to-green and blue-to-yellow. L*a*b* is often used as a reference colour space for performing transformations from one colour space to another because, at present, it is the space that comes closest to perceptual linearity. For example, popular Adobe applications such as Adobe Photoshop use a version of L*a*b* as their internal colour model.
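For the curious, the standard CIE conversion from XYZ into L*a*b* looks like this; a minimal sketch assuming a D65 reference white:

# CIE XYZ -> L*a*b* (standard CIE 1976 formulas), relative to a
# reference white; D65 values are used here.

XN, YN, ZN = 95.047, 100.0, 108.883   # D65 reference white

def _f(t):
    # Cube root above a small threshold, linear segment below it; this
    # piecewise curve is what makes L*a*b* nearly perceptually uniform.
    d = 6.0 / 29.0
    return t ** (1.0 / 3.0) if t > d**3 else t / (3 * d**2) + 4.0 / 29.0

def xyz_to_lab(x, y, z):
    fx, fy, fz = _f(x / XN), _f(y / YN), _f(z / ZN)
    L = 116.0 * fy - 16.0     # lightness: 0 (black) to 100 (white)
    a = 500.0 * (fx - fy)     # green (-) to red (+)
    b = 200.0 * (fy - fz)     # blue (-) to yellow (+)
    return L, a, b

print(xyz_to_lab(95.047, 100.0, 108.883))   # reference white -> (100, 0, 0)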

Saturation, Hues, Tints And Shades
A special vocabulary is used for describing colour. Unfortunately these terms are often misunderstood or misused and in some cases are hard to define. They are saturation, hue, shade and tint.

The strongest, purest colour is said to be saturated. For displayed digital images, fully saturated colours are represented by the largest channel values: 255 in a 24-bit image (32-bit colour usually uses the extra 8 bits to represent other image attributes such as transparency). Very pure saturated colours are quite rare in nature because most naturally occurring colours are mixtures of more than a single wavelength. Artificially produced, single-wavelength colours are fully saturated.

Colour hue is the attribute that describes what appears to be the colour’s dominant wavelength. For example, the hue for pink would be red.
For mixing opaque pigments, as in paint, tints are achieved by mixing a colour with white, while shades are achieved by mixing black with a colour. For example, light pink is red mixed with white, while dark red is red mixed with black. For colour produced by emitted light, such as from a display screen, tinting is always a question of mixing the colour with white, because the base state of all displays is black. For displays, the shade attribute of a colour can be regarded as brightness or luminance.
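In code, tints and shades reduce to linear mixes towards white or black. A small illustrative sketch for 8-bit RGB values:

# Tints mix a colour towards white, shades towards black.
# Channels are 8-bit (0-255); amount runs from 0.0 (unchanged) to 1.0.

def tint(rgb, amount):
    return tuple(round(c + (255 - c) * amount) for c in rgb)

def shade(rgb, amount):
    return tuple(round(c * (1 - amount)) for c in rgb)

red = (255, 0, 0)
print(tint(red, 0.5))    # (255, 128, 128): light pink
print(shade(red, 0.5))   # (128, 0, 0): dark red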

Colour Gamuts
Every colour device has a gamut, a range of colours it can capture or reproduce. This gamut can be represented as a device-dependent colour space within the framework of any of the reference colour spaces. The term gamut is used to refer to the relative volume of the colour space. It is often also used to refer to the area of the 2D projection of a colour space, since this is related to the volume and represents the range of saturated colours. The human visual system, for most people, has a large gamut. Frequently this is referred to as containing 65 million colours. The human visual system is not good at recognising absolute colours, but is much better at recognising the difference between discrete patches of similar shades. The figure of 65 million recognisable shades has been determined by experiment as the average colour discrimination for most people.

Colour Intent
Colour intents are one of the most difficult aspects of colour management to understand. Users of colour management systems are required to choose which intent to use. Intents are needed because none of the current elements in a colour capture, manipulation and reproduction chain can match the gamut of the human visual system. All capture devices, displays and printers have a smaller gamut. Colour printers usually have the smallest gamut, or device colour space, of all the colour devices we use. Some of the colours we can see cannot be captured, and some of the colours that a camera or scanner can capture cannot be reproduced. This leaves us with the problem of what to do with the colours we have captured in an image but cannot reproduce in print, and this is where the choice of intent comes in.

A colour intent describes how any colours that fall outside the gamut of a reproduction device, normally a printer, are rendered. There are four intents in common use: perceptual, relative colorimetric, absolute colorimetric and saturation.

Perceptual rendering compresses the source gamut so all the out-of-gamut colours fit inside the destination gamut. This can result in some distortion of the relationships between colours. With perceptual rendering none of the original colour information is lost and in theory the transformation could be reversed in order to restore the original image.

Relative colorimetric rendering maintains a near exact relationship between in-gamut colours but may clip colours that are out of gamut. Clipped colours are lost and cannot be recovered.

Perceptual or relative colorimetric intents are the best choices for photo realistic images.

The absolute colorimetric intent is similar to relative, in that it clips out-of-gamut colours and preserves those that are in gamut, but it treats the white point differently. With absolute the white point does not change, while with relative it may, if this is required to maintain the relationship between colours. This means that with a relative conversion the appearance of whites in an image may change, they may get slightly more red or blue, while with absolute they won’t.
Absolute rendering is used when it is important that the colours that are reproduced remain as accurate as possible.

Saturation rendering tries to preserve the saturation of colours while making no attempt to reproduce photo realistic images. It is a good choice for business graphics such as charts.
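To make the clipping-versus-compression distinction concrete, here is a deliberately simplified sketch that treats a gamut as a one-dimensional numeric range; a real gamut is a 3D volume handled through ICC profiles:

# Toy one-dimensional model of two rendering intents.

def perceptual(values, src_max, dst_max):
    # Compress the whole source range so every value fits; relationships
    # between colours shift slightly, but nothing is lost.
    return [v * dst_max / src_max for v in values]

def relative_colorimetric(values, dst_max):
    # Keep in-gamut values exact; clip anything outside (irreversibly).
    return [min(v, dst_max) for v in values]

src = [0.2, 0.8, 1.2, 1.5]    # the last two fall outside the destination gamut
print(perceptual(src, src_max=1.5, dst_max=1.0))   # all shifted, none lost
print(relative_colorimetric(src, dst_max=1.0))     # [0.2, 0.8, 1.0, 1.0]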

The Colour Management Workflow
A colour management system consists of a number of hardware devices, each with its own gamut or colour space, with an image in the form of digital data being passed from one device to another. As the data moves between devices it often needs to be translated from one colour space to another. To do this it is necessary to know the source and destination colour spaces used; the devices must be calibrated, and a profile that describes each device’s colour gamut must be used.

A typical colour system chain consists of a camera or scanner passing data to a computer running a colour management application. Edited colour images from the colour management application are output to a web page, a local colour printer or sent to a commercial colour press. A calibrated and profiled colour display is used to view the colour images at various stages in the process.
A properly calibrated and profiled colour display is absolutely vital to establishing colour control because it is the user’s window on the entire process. The display must be calibrated and profiled using a hardware monitor calibrator such as the ColorVision Spyder 2. Older displays that have been in use for several years are often unsuitable because they can no longer generate the peak brightness that is required for accurate calibration. Although LCD monitors are now the usual display type, many of them are unsuitable for serious colour work, either because they lack the necessary user controls for calibration, or because they are limited to only 18-bit colour rather than 24-bit.

Colour And Computers
The majority of computer users expect to be able to use a digital camera or a scanner to capture an external scene, to edit and manipulate those images and then to print or display them on-screen without any unexpected changes in colour balance. Most people would probably say that they would expect a reproduced image on screen or printed on paper to “look the same” as the original scene. It would be very bad for business if, when operated by the average user, computers displayed or printed colour images where the colour balance was obviously wrong. In the early days of colour computing this is frequently exactly what did happen.

This problem has been solved by building automatic colour management into the operating system and into printer drivers. It works by using a colour space that has a relatively small gamut for reference and by assuming that, by default, everything in the colour work flow conforms to that colour space. The space used, sRGB (standard Red Green Blue) was originally adopted by Microsoft and HP for standardising colour on web pages and is based on the colour space of the typical computer monitor (at the time, this was CRT). Therefore most monitors conform reasonably well to sRGB without any calibration or profile. Many digital cameras, especially the consumer models, use sRGB as their default colour space, although it may be possible to select other colour spaces.
Instead of individual colour profiles measured from the actual devices being used, manufacturers’ generic profiles are used. Generic profiles, which allow for a certain amount of manufacturing variation, are usually included with most colour devices, or are available from websites.
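Part of what makes sRGB a workable default is that it also fixes a standard transfer (“gamma”) curve, so every device that claims sRGB agrees on how stored numbers map to light. The published encoding and decoding functions, sketched in Python:

# Standard sRGB transfer functions: linear light <-> encoded 0.0-1.0 values.
# The curve is roughly gamma 2.2 with a short linear segment near black.

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(s):
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

print(round(linear_to_srgb(0.5), 4))   # ~0.7354: mid grey stores well above 0.5
print(round(srgb_to_linear(0.5), 4))   # ~0.2140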

When it comes to printing, all printer manufacturers have a selection of presets included in their printer drivers designed for a range of papers, although these are of course always designed for use with the printer manufacturer’s own brand of papers. These profiles are often supplemented by settings for things like “best quality photo” or “text and image” which are effectively simple ways of setting the colour intent.

Although automatic sRGB works reasonably well and is a lot cheaper than a fully colour managed system, intentional colour management allows more accurate control of colour and can provide better results.

The Colour Management Chain – Open Or Closed Loop
A colour management work flow can be regarded as a closed loop when the characteristics of the final reproduction device can be measured and fed back into the work flow. If the final colour output is to be a commercial printing press the ideal would be to obtain an individual device profile for the press, inks and paper used and to use that to create the output file. With such a profile it is possible to close the management loop. Commercial print press operators in general have always proved to be a huge stumbling block in this respect. It is normally extremely difficult to establish a meaningful dialogue with commercial printers on the subject of colour in general or on press profiling.

Most printers will supply hard copy colour proofs on request and for a fee. The problem with these is that usually they are not produced using the same press characteristics, inks and paper used for the final print run, so they are only useful for showing really gross errors.

What is possible is to establish which standards a commercial printer says they calibrate their presses to, and what type of paper they will use. This is likely to be one of the commercial standard profiles supplied with leading image editing software. Adobe Photoshop, for example, has profiles for SWOP (Specifications for Web Offset Publications), Euroscale Coated v2 and so on.

In Conclusion
In essence, colour management simply consists of choosing and applying the correct colour spaces and providing accurate device profiles at every step of the colour management work flow. In the past, poor software implementation, a lack of accurate low-cost hardware and software tools, and poor support for colour management within the operating system made it hard for the average person to establish a colour management work flow that actually works.
Fortunately, over the years the situation has improved, and software and hardware colour management systems that leverage the latest advances in technology, such as high-intensity LEDs instead of exotic gas lamps, are becoming available. It is now possible to set up a calibrated and profiled colour work flow on an ordinary desktop PC at reasonable cost.

HAPPY BIRTHDAY i@mhErO!


Sunday, February 8, 2009

Altec Lansing inmotion iM600


iPods On The Move

Altec Lansing’s inMotion iM600 is an iPod dock that doubles as a radio and speaker for your PC or any other MP3 player.

The dock itself is stylish and compact: a small rectangular block that unfolds to stand on a table. All the iPod models can be docked, including the latest iPod Touch. You can charge your iPod or synchronise your music as well.

There are controls for volume adjustment at the base of the dock and track-switching buttons on the top. The volume control buttons are hard and tacky, whereas the smaller track-control buttons are very soft and tiny. There does not seem to be any button for pausing or stopping tracks. Other than that, the dock is simple to use and comes with an inbuilt display on the front, which shows the track source and volume level. There’s also a compact little remote control, like those found on many of Altec Lansing’s desktop speaker sets.

The iM600 has good music quality, but it lacks bass in the tracks due to its flat design. It is loud though, and there is little or no distortion with the volume set to maximum. The SFX button doesn’t really do a whole lot other than making the sound slightly better.

The dock comes with an auxiliary connector, which means you can connect any other MP3 player to it, or even your computer for that matter. You can also use it as an FM radio, and the antenna comes well tucked into the back of the dock. One of the biggest plus points of this product is the inbuilt rechargeable Li-ion battery, which charges whenever the dock is powered. The dock is therefore portable, but docking an iPod in a moving vehicle might not be a good idea: vibration could damage the base plug or the iPod connector.

The Altec Lansing iM600 is priced at Rs 7,500. It’s a tad costlier than we’d like it to be, but the functionality, flexibility and performance that it offers is hard to come by in such a product, especially in the Indian market.

Panasonic unveils 1-inch thick HDTV



Panasonic won the day with its unveiling of the Z1 Viera TV. This super thin set — thickness of a mere inch — is also a wireless TV. What this means is that the TV has no inputs. Instead it comes with a set-top box which has the requisite ports and connects with the Z1 Viera wirelessly, with no loss or degradation in quality. The box can then be placed anywhere in the room. And yes, it does all this in full HD 1080p glory.

Saturday, February 7, 2009

Raju brothers treatment in jail

From having to share their cell and toilet with other prisoners such as bootleggers, and sleeping on the ground, disgraced Satyam founder B. Ramalinga Raju and his brother B. Rama Raju are now entitled to benefits like a special room, cots, pillows, a separate toilet and kitchen.
Ever since they were sent to Chanchalguda Central Jail here on January 10 for the massive Rs 7000 crore fraud, the Raju brothers were being treated as ordinary prisoners.
They will now enjoy special class prisoner status following an order by a city court on Friday on a plea moved on behalf of the accused.
The Rajus will now get a special cell, cots, pillows, mattresses, sheets, mosquito nets, a separate kitchen, toilet, newspapers, pen, writing pads and television. They can also have home food.

The disgraced founder and former chairman of Satyam Computer Services Ramalinga Raju and former managing director Rama Raju were being treated as ordinary prisoners ever since their arrest on January 9 on charges of cheating, criminal conspiracy, forgery and falsification of records.
The court passed the orders on Friday on the basis of an inquiry conducted by the Hyderabad district collector, who certified that the Rajus were used to a certain lifestyle.
It was exactly a month ago (January 7) that Ramalinga Raju quit as Satyam chairman while admitting to the Rs 7000 crore fraud, the biggest in India's corporate history.
Sixth additional chief metropolitan magistrate D. Ramakrishna had reserved orders on the special status petition on January 16. The counsel for Rajus had sought the special treatment under rule 730 of the Andhra Pradesh Prisons Rules, saying they had stature in society and were used to a certain lifestyle.
The prosecution, however, opposed the same and remarked that those who made money at the cost of the poor didn't deserve such privileges.
Having committed such a "monumental fraud", the Rajus do not enjoy any status as such and the question of a special status in jail does not arise, additional public prosecutor Ajay Kumar had contended.

Rs. 600 cr bank loan for Satyam

The cash-strapped Satyam Computer Services would borrow Rs 600 crore ($130 million) from banks to meet its working capital requirements, the company confirmed on Thursday after a two-day board meeting here.
“This funding, along with healthy collections, is expected to help the company tide over its financial challenges,” the IT bellwether said in a statement, but did not name the banks, which had sanctioned the funds.
Satyam also reaffirmed that the January salaries for its global employees and February salaries (fortnightly) for its US-based staff have been paid from internal accruals.
“Completing the complex financial restatement exercise, including announcement of third quarter results and ensuring prudent financial operations will be the primary focus in the next few weeks,” Partho Datta, who has been appointed as one of the two special advisors to the board, said.

Datta, a veteran chartered accountant, will be overseeing the financial operations of the company.

Former Tata Chemicals managing director Homi Khusrokhan is the other special advisor appointed to assist the six-member Satyam board.

The board also appointed A.S. Murty, a Satyam veteran, as the new chief executive of the IT bellwether with immediate effect.

Murty was Satyam's chief delivery officer, responsible for delivery excellence and leadership development.

“Murty is a Satyam veteran of 15 years, who has been in its forefront since January 1994. He brings to play a deep understanding of the organisation, proven expertise in leading a business unit, overseeing global delivery, nurturing customer relationships and spearheading the entire gamut of the human resources function,” board member Deepak Parekh said in a statement.

C. Achuthan, former presiding officer of the Securities Appellate Tribunal and a Satyam board member, chaired the two-day meeting.

Friday, February 6, 2009

Google Maps Version 3.0



Yes, you saw it right. Google Maps for Mobile, the native S60 version, just hit v3.00 an hour ago. It looks awesome and is very user-friendly as always. Several new features and minor improvements to the user interface have been provided. The main change is the addition of Google Latitude, a way of finding your friends on the map. To grab your own copy of v3.00, go to m.google.com and click on ‘More’ and then ‘Maps’.

Go download it and try it out. Thank you, Google.

Some screenshots for you above.

Key to energy in Uranium

Theoretically, India should set up 40,000 MW of reactor capacity by 2020 to meet its energy requirements and become energy-independent by 2050, Atomic Energy Commission Chairman Anil Kakodkar said on Thursday.

Even after the most optimal use of energy resources, including the available thorium and uranium reserves, “we will face an energy deficit of 400 gigawatts by 2050.” One gigawatt is 1,000 MW.

Dr. Kakodkar was addressing an interactive session on “Perspectives on Evolving Nuclear Power Programme” organised by the Calcutta Chamber of Commerce here.

Energy Value

Emphasising the importance of uranium import, he said: “Importing uranium has specific advantages … spent uranium has much larger energy value … it will be the source of energy that will multiply to breach the energy deficit gap in the coming years.”

Importing uranium under the international civil nuclear programme would not compromise the country’s autonomy in nuclear programmes.

“The international civil nuclear programme will be pursued without any compromise on domestic autonomy and on the pursuit of usage of nuclear energy for whatever purpose.”

Dr. Kakodkar also laid stress on the importance of thorium in the three-stage nuclear programme being pursued by the country’s scientists.

“We have one of the largest thorium resources in the world that can be used in the three-stage programme … a 300-MW thorium reactor will come up very soon … the objective of the programme is to reach a stage when the country can make full use of its thorium resources.”

Admitting that there were delays in operationalising uranium mines, Dr. Kakodkar said exploration was on and new mines were expected to start functioning soon in Karnataka and Meghalaya.

Potable water from air

HYDERABAD: They are now making water from thin air, literally.

Jalimudi village in East Godavari district will get 1,000 litres of potable water every day, produced from air. The water station, the first of its kind set up in any village in India, has begun trial runs.

“The water… has been sent for quality tests and it’s been certified fit for consumption,” Meher Bhandara, director of WaterMaker (India) which manufactures the units, told The Hindu over the phone from Mumbai.

Projected as a boon to people in rural areas, especially in the coastal regions, the system runs on electricity and uses refrigeration to condense water from air. Blower-driven air is made to pass through filters, and a refrigerant is circulated, leading to condensation of water, which is collected in a holding tank.

The efficacy depends on the relative humidity and temperature. “Each machine will produce water if the humidity is between 70 and 75 per cent and the temperature between 25 °C and 32 °C,” Ms. Bhandara said. The higher these are, the more water is produced.
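As a rough illustration of why that humidity and temperature window matters, the dew point (the temperature the machine must chill incoming air to before moisture condenses) can be estimated with the common Magnus approximation. A sketch, not WaterMaker's own engineering figures:

# Dew point via the Magnus approximation: the temperature air must be
# cooled to (at constant pressure) before its moisture condenses.

import math

def dew_point_c(temp_c, rel_humidity_pct):
    a, b = 17.62, 243.12   # Magnus coefficients for water, roughly -45..60 C
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

print(round(dew_point_c(30, 75), 1))   # ~25.1 C: little chilling needed
print(round(dew_point_c(30, 30), 1))   # ~10.5 C: far more energy per litre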

The machine, which costs about Rs. 3 lakh, was installed in Jalimudi by the company free of cost.

Galactic Spectacle


GALACTIC SPECTACLE: The spiral galaxy NGC-4921 with a backdrop of more distant galaxies in an image from the Hubble Space Telescope, released on Thursday. It was created from 80 different pictures using yellow and near-infrared filters.


Thursday, February 5, 2009

Pak investigators find Bangladesh link to Mumbai attacks

Islamabad (PTI): Pakistan's probe into the Mumbai attacks is likely to indicate that the incident was the handiwork of a network of Muslim fundamentalist groups in South Asia as investigators have found evidence of a Bangladeshi connection, according to a media report.

The report on Pakistan's investigation is likely to indicate that the attacks were carried out by "an international network of Muslim fundamentalists present in South Asia and spread all the way to Middle East" while making a case for regional anti-terror cooperation, the influential Dawn newspaper today quoted its sources as saying.

The daily said Pakistani sleuths were "closing in on a Bangladeshi connection" to the attacks and had "evidence of not only the involvement of a banned militant organisation, Harkat-ul-Jihad-al Islami of Bangladesh, but also of its role in planning the attack and training the terrorists".

A reference to this is likely to be made in the report on Pakistan's investigation, the daily said. It is widely expected that the report will be made public and shared with India later this week.

The Pakistani investigators were also trying to ascertain "if at least one of the Mumbai attackers was of Bangladeshi origin", the newspaper said.

Diplomatic and other sources told PTI that the Pakistani security establishment and the senior-most American diplomats here had been referring to a possible Bangladeshi connection to the Mumbai attacks in the past few days.

Both Pakistani security officials and US diplomats have also been making a case for "larger regional cooperation", the sources said.

Though the contents of the Pakistani report are a tightly guarded secret, the Dawn quoted sources privy to its contents as saying that it would "emphasise that the Mumbai incident is not strictly a Pakistan-India issue".

Pakistan's High Commissioner to Britain Wajid Shamsul Hassan had said in a recent interview with an Indian TV channel that investigations had revealed the attacks were not planned inside Pakistan.

His remarks were described by Prime Minister Yousuf Raza Gilani and Foreign Minister Shah Mahmood Qureshi as "hasty".

Besides the Bangladeshi connection, there were "clear indications that some of the planning for the attacks was done in Dubai and there is also an element of local Indian support", the Dawn reported.

"Investigators believe it would have been almost impossible to plan and execute an attack of this proportion and sophistication without the local Indian support – a fact India is shying away from," it reported.

Wednesday, February 4, 2009

How Microsoft Outlook ruined a birthday party

If you’d like to order a cake at Wegman’s bakery, you can simply email them a personalized message that will be printed on the cake.

A lady in NY followed the same procedure and ordered a birthday cake over email but here’s what they delivered on her birthday - a cake with some HTML icing.


It turned out that she had used Microsoft Outlook to send her email, but Wegman’s email system failed to recognize Outlook’s proprietary HTML tags, hence the goof-up.

This is best explained by a Wegman’s employee: “We just cut and paste from the email to the program we use for printing the edible images. We are usually in such a hurry that we really don’t have time to check, and if we do, the customers yell at us for bothering them.”

Top 10 upcoming technologies

From a few favourite songs on magnetic tape in a Walkman, to wireless, portable MP4 movies; from beepers to cell phones; from SLRs to camera phones — in just a couple of decades, science has taken us beyond the predictions of futurology and into the realms of Asimov and Arthur C Clarke. In an environment of such intense technological flux, we’ve become so used to witnessing Spidey-like jumps in technology during our lifetimes that even touch screens are beginning to seem old hat. We at Digit share your impatience, and so we decided to satisfy our curiosity — and yours — by ferreting out and laying before you ten of the most remarkable technologies being worked on today, which are set to bring sci-fi to reality.

1. CUBIC CHIPS: Laying It On Thick
In 1965, Intel’s Gordon Moore stated what has come to be known as Moore’s Law — that the number of transistors on a chip will double about every two years (how many times have you heard that one before?). But as chips get smaller, engineers are already facing problems in trying to cram innumerable transistors into decreasing space.

The Rochester Chip
Enter the Rochester chip – a chip that’s been designed vertically, bottom up, specifically to maximise the main functions of a chip by using several layers of processors. However, this ‘3D’ chip is unlike the ‘stacking’ idea, where present-day chips are merely stacked one on top of another. This one is built so that each layer interacts with every other layer as part of a single circuit, while performing different functions. Chips for audio, for example, differ in requirements from chips that process digital photos or text. The Rochester chip is designed to deal simultaneously with the different speeds and power requirements of these processes.

The design of this cubic chip (not to be confused with the Apple Power Mac G4 Cube, which was a computer in itself) is purportedly the first to integrate each layer in such an optimally seamless and efficient manner. Piling several integrated circuits together made it necessary first to ensure effective insulation between each chip, and then to drill thousands of perforations in the insulating layer to allow vertical connectivity. The prototype of this ‘cube’ is already functioning at the University of Rochester at a speed of 1.4 gigahertz.

Eby Friedman, Distinguished Professor of Electrical and Computer Engineering at the University of Rochester, New York, USA, is the director of this project, and he’s had help in the form of engineering student Vasilis Pavlidis. The chip, which has been specially fabricated at MIT (Massachusetts Institute of Technology), is still in the prototype stage.

and if this comes through


The continuous shrinking of integrated circuits augments speed, but connecting multiple chips horizontally means that more space is required. Since all the layers act like a single system, the Rochester chip functions like a folded-up circuit board. Imagine the motherboard of your computer shrunk to the size of a Rubik’s cube. Besides, the architecture of the cube is such that it could increase the speed of your iPod or cell phone by up to ten times that of chips today. More height means less width, so finally, perhaps, we’ll have flatter CPUs, smaller printers, minuscule iPods etc. — and as a result, more space to use around the room. Skepticism has been voiced on whether the industry would take to it well, but we at Digit feel that the future belongs to chipper chips and not whopper circuit boards.

2. PROTEIN SHAKERS: Hard Disks Go Organic
If nature can use proteins to help our brains store memory, why can’t we? If you take a good look at our CDs and DVDs, you’ll realise that they are enhancements of the vinyl record – albeit on a microscopic scale. Perhaps, then, our use of synthetic materials is due to end. Memory devices made out of biological materials have long been considered to have the power to process information more quickly and allow more data to be stored than the present-day options available to us. Several experiments in the past have floundered, but the challenge draws humankind onwards.

Amiable Aminos
Our traditional hard disks, CDs, etc. are either magnetic or optical data storage systems which are becoming, well, harder disks to put more data into. Rooms full of memory devices seem to be the only way of managing the mammoth databases the world is now dealing with. The most significant advancement in this area has been made by researchers in Japan who have managed to develop a new protein-based memory device.

Koji Nakayama, Tetsuro Majima and Takashi Tachikawa have succeeded in etching or ‘recording’ specific data on a glass slide, using a fluorescent protein. The combined use of light and chemicals effectively stored information that was later ‘read’ and then erased. Thus, recording, playback and deletion — the basic functions of memory storage instruments — were proved possible using biological materials. They define the material as ‘a biological device that enables us to spatiotemporally photoregulate the recording, reading, and erasing of information on a solid surface using protein’.

and if this comes through


The scientists involved in the project have themselves suggested that the technology could be used for biosensing and diagnostic assays. But of importance to us is their third suggestion, that it be utilised for ‘record-erasable soft material’. The possibilities, if this works, are limitless, and a reversal of sorts, in which bio-chips seamlessly enter our bodies to enhance human functionality, seems possible in the not-so-distant future. Contemplating the consequences of bio-memory instruments, one immediately fears ‘Terminator 3’- and ‘Matrix’-like dystopian scenarios, but never fear: the research is not even at the prototype stage and has a long way to go before it is proved to work as well as today’s storage instruments. Several other parallel efforts towards the protein chip are ongoing, and it therefore remains to be seen who will come up with the best product.

3. SENSOR GLOVES: Reckoned Skin
From the calculator watch to the HMDs (Helmet Mounted Displays), we have always been a little impatient and have now firmly begun to believe that a person’s computer should be worn, much like eyeglasses or clothing are worn, and interact with the user based on the context of the situation. In fact, at a time when skin is being treated more and more like cloth, intelligent clothes are one sure-shot way to bring back to clothes their primal status of functional accessories. While laptops and palmtops are steps in this direction, serious advances indicate that the dream may not remain a dream much longer.

Fits Like A Glove
Most technological advancements and breakthroughs, regrettably, emerge from conflict, war and the needs of the military (ARPANET, aviation technology, etc.). The latest example is an intelligent glove. US soldiers in Iraq already use wearable computer systems but lack efficient input devices. Now, a company called RallyPoint, based in Cambridge, MA, has developed a sensor-embedded glove that allows a soldier to easily view and navigate digital maps, activate radio communications, and send commands hands-free. This isn’t so remarkable in itself, considering that several groups have developed sensor-filled gloves in the past, using accelerometers, gyroscopes and other high-tech sensors.

However, this one is a little different, because it is more practical, rugged and made for the military. It has been designed in such a way that a soldier can use it to grip an object and still continue to use its electronic capabilities. The glove has four custom-built push-button sensors sewn into the fingers. Radio can be activated by the sensors on the tips of the middle and fourth fingers, with each finger used to select a different channel. On the lower portion of the index finger is a tiny sensor that switches modes, from ‘Map Mode’ to ‘Mouse Mode’. Another sensor, on the little finger, can be used to zoom in (or out) of a map while in ‘Map Mode’. The same sensor in ‘Mouse Mode’ is a mouse-click button.
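To picture how such mode-dependent buttons behave, here is a hypothetical sketch of the input mapping; the sensor names and actions are invented for illustration and are not RallyPoint's actual firmware:

# Hypothetical sketch of the glove's mode-dependent input mapping.

ACTIONS = {
    "map":   {"index": "switch_to_mouse_mode", "little": "zoom"},
    "mouse": {"index": "switch_to_map_mode",   "little": "click"},
}
RADIO = {"middle": "radio_channel_1", "fourth": "radio_channel_2"}

def handle_press(mode, finger):
    # The radio sensors behave the same in every mode; the others
    # change meaning depending on the current mode.
    return RADIO.get(finger) or ACTIONS[mode].get(finger, "ignored")

print(handle_press("map", "little"))    # zoom
print(handle_press("mouse", "little"))  # click
print(handle_press("map", "middle"))    # radio_channel_1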

and if this comes through


Although it probably hasn’t really been envisaged yet, the glove-computer has immense possibilities for the future of gaming. We all know about the magic of the Apple motion sensor, the PS3 Sixaxis motion detection and so on. But with the glove-computer, the extent of immersion and interaction in a game could increase ten-fold. No more handheld pads or joystick surrogates. Everything you need would literally be in your hands. Now if only they could find a way to make it wireless...

4. ANTI-VIRUS CLUSTER: A Cloud Full Of Silver Linings
If you’ve heard of Web 2.0, no doubt you’ve heard of Cloud Computing — but we’ll tell you anyway. Cloud computing is basically a concept that combines Web 2.0, SaaS and the latest trends in technology to provide seamless, richer services over the Internet. No self-respecting computer today can get by without good antivirus software installed, grappling with the number of trojans, malware, worms and hacker-go-lucky viruses trying to infiltrate your system.

Remote Control
How nice it would be if the task of checking the files and documents you open were done by some software deep in the infinite Web, monitoring your PC remotely! Researchers at the University of Michigan have developed a new cloud-based approach to antivirus, which they call ‘CloudAV’ and which, they claim, can outdo any antivirus package on the market.

Prof. Farnam Jahanian, professor of computer science and engineering in the Department of Electrical Engineering and Computer Science, along with PhD student Jon Oberheide and postdoctoral fellow Evan Cooke, evaluated 12 different antivirus software programs (including the popular McAfee, Avast and AVG) by pitting them against more than seven thousand malware samples. They found that, due to increasingly innovative viruses and the growing complexity of antivirus software itself, detection of malicious software was really low — about 35 per cent. Besides having several vulnerabilities themselves, most of the packages took about seven weeks on average to equip themselves against new virus threats in circulation on the Web.

Another major drawback of today’s antivirus packages is that you can’t run more than one of them simultaneously on the same system. CloudAV is a single solution to all of these problems, for the following reasons (a toy sketch of the idea follows the list):

  • It analyses potential threats to your system using several different antivirus programs at the same time, significantly increasing the degree of protection for your system.
  • It operates by installing a simple, lightweight software agent on your computer, mobile phone or laptop, which automatically detects a new document or program being opened and sends it to the antivirus cloud (somewhere on the Web) to be analysed.
  • With CloudAV, it’s pouring antivirus agents: it uses 12 different detectors that run parallel to one another, but independently, to tell your computer whether it’s okay to open a particular file.
  • It caches the results so that detection becomes faster in future.
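Here is a toy sketch of how such a lightweight agent could behave: hash the file, consult a local cache, otherwise fan the file out to several detectors in parallel. All names are hypothetical; this is not CloudAV's published code or protocol.

# Toy cloud-AV client agent, for illustration only.

import hashlib
from concurrent.futures import ThreadPoolExecutor

CACHE = {}                                  # file hash -> verdict

def cloud_scan(path, engines):
    # engines: callables standing in for remote antivirus services,
    # each returning True if it flags the file as malicious.
    data = open(path, "rb").read()
    digest = hashlib.sha256(data).hexdigest()
    if digest in CACHE:                     # cached verdict: no network trip
        return CACHE[digest]
    with ThreadPoolExecutor() as pool:      # query all engines in parallel
        results = list(pool.map(lambda engine: engine(data), engines))
    verdict = "malicious" if any(results) else "clean"
    CACHE[digest] = verdict
    return verdict

# Usage (hypothetical engines):
#   verdict = cloud_scan("download.exe", [engine_a, engine_b, engine_c])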

and if this comes through


The latest irritant in India is frequent virus attacks on our cell phones. Typically, cell phones lack the space and power to accommodate bulky antivirus software. Leaving the job of detection and quarantine to an external agent — and not just one, but twelve — would be a boon for users of mobile computing devices. For the rest of us too — PC users — we’ll stop cursing our favourite AV vendor for the viruses that weasel in and start praising CloudAV.

5. TELESCOPIC PIXELS: Mirror Writing
As alternatives to CRTs are becoming cheaper, more than half the globe has switched over to LCD monitors or TVs — not to mention the ubiquitous TFT screens in our cell phones and PDAs. Naturally, we have also begun to perceive the manifest errors and glitches in using LCD technology. Not to rest on their laurels, scientists have already begun investigating possible new technologies to replace LCD screens. And this time, they are doing it with mirrors.

Whether it’s LCD, Plasma or CRT screens, we’re stuck with pixels. Pixels — short for ‘Picture Elements’ — are the tiny dots that make up the images on a screen. To cut a long story short, the quality and accuracy of the image is determined by the ‘resolution’ of the screen, so the greater the number of pixels, the sharper the image.

Even though LCD screens are all the rage, there are several drawbacks that noticeably hinder the achievement of a truly high-quality image:

  • The pixels in an LCD screen do not really turn completely off.
  • It’s virtually impossible to view the image on a TFT/LCD screen in natural ambient light.
  • When images move fast, the pixels take about half a second to switch between colours, and when these are very different, this leads to momentary blurs.
  • Dead or stuck pixels, which are damaged in such a way that they permanently stay in the on or off state, seriously affect visual accuracy.
  • Finally, by the time light passes through the three stages of an LCD screen (the polarising film, the liquid-crystal coat and the colour filters), almost 90 per cent of the light is lost, making the screen appear darker and the displayed image dim.

Microsoft To The Rescue
Researchers at Microsoft have come up with a terrific new design for pixels (published online in Nature Photonics, 20th July, 2008) in which each individual pixel is made up of two opposing microscopic mirrors, one of which changes shape under an applied voltage, reflecting light through a hole in the primary mirror and onto the display screen. Both mirrors are made of aluminium, and the first one, with a hole in the centre, is only 100 micrometres wide and 100 nanometres thick!

When the pixels are ‘off’, both mirrors reflect the light back towards the source, so none emerges on the other side of the screen. However, when they’re switched on, the disk bends towards a transparent electrode (typically made of indium tin oxide) under a small applied voltage. The light therefore bounces towards the second mirror and emerges through the hole.

and if this comes through


Michael Sinclair, senior researcher in the Hardware Devices group, under the direction of Turner Whitted at Microsoft, is convinced that once the design makes it past the prototype stage, it will replace conventional display units all over the world. Less powerful backlights would be needed, and this would bring down costs while increasing the battery life of your cell phone or laptop. The telescopic pixels allow about 36 per cent of the light through; set against the roughly 10 per cent that survives an LCD stack, that accounts for the three- to six-fold increase in brightness compared with present-day LCD technology.
Just as happened with CRT monitors, people are sooner or later going to want more space on their shrinking office desks, and telescopic pixel technology could shrink the depth of the screen to the thinness of a whiteboard. As the design is simple and the materials are cheaper, fabrication as well as price should be substantially easier on the pocket. One possible drawback could be the mechanical nature of the parts — mechanical parts tend to wear out and break down — which may raise maintenance issues, but the positives far outweigh this single danger. So, though we’re not holding our breath, we’re definitely looking forward to the quick development and commercialisation of the telescopic pixel screen.

6. SENSITIVE ARTIFICIAL LISTENERS: Sensitising Machines
We’re already quite familiar with our computers interacting through auditory means — voice commands, text-to-speech software, etc. — but most of us get a bit bugged by the monotony of the electronic voice speaking back to us. Science fiction writers have always dreamt of computers becoming emotional or talking like people (there was even this computer which fell in love in the 1984 Hollywood movie ‘Electric Dreams’). The fiction may be inching towards fact with the Sensitive Artificial Listener system (SAL) being developed by an international team including Queen’s University, Belfast.

Making Human Inputs More Acceptable To Machines
Humans do not communicate through words alone. Non-verbal communication, in fact, is said to constitute more than 90 per cent of our oral interactions. Computers, however, understand only crystal-clear commands, and the ambiguity, fluidity and shifting meanings of body language and facial expressions have so far been beyond machines.

Using a unique blend of science, ethics, psychology and linguistics, scientists are attempting to overcome this obstacle too. SEMAINE (Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression) is a project undertaken by an international group of technologists led by DFKI, the German Research Centre for Artificial Intelligence, and including Imperial College London, the University of Paris, the University of Twente in the Netherlands, Queen’s University, Belfast and the Technical University of Munich. The team, with a European Commission grant of 2.75 million euros, aims to create a Sensitive Artificial Listener (SAL) system that will perceive a human user’s facial expression, gaze and voice while interacting with him or her. Just as humans do, the system will alter its own tone, behaviour and actions according to the non-verbal stimuli it receives (and actually perceives) from the user. For the first time, a project to create a machine-human interface system is drawing on fields as diverse as psychology, linguistics and ethics at every step of its endeavour.
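SEMAINE’s own software is far richer than anything that fits here, but the basic loop (read non-verbal cues, infer a state, adapt the response) can be sketched in a few lines of Python. Everything below, the feature names, thresholds and response styles included, is invented purely for illustration and has no connection to the project’s actual models.

# Toy "sensitive artificial listener" loop: map crude non-verbal features
# to an inferred mood, then pick a matching response style.
# All feature names and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class NonVerbalCues:
    smile_intensity: float       # 0.0 (none) .. 1.0 (broad smile)
    gaze_on_screen: float        # fraction of time spent looking at the screen
    voice_pitch_variance: float  # flat voice ~ 0.0, animated voice ~ 1.0

def infer_mood(cues: NonVerbalCues) -> str:
    if cues.smile_intensity > 0.6 and cues.voice_pitch_variance > 0.5:
        return "cheerful"
    if cues.gaze_on_screen < 0.3:
        return "distracted"
    if cues.voice_pitch_variance < 0.2:
        return "bored"
    return "neutral"

RESPONSE_STYLE = {
    "cheerful":   "mirror the energy: faster speech, upbeat phrasing",
    "distracted": "pause, then gently re-engage the user",
    "bored":      "change the topic or ask an open question",
    "neutral":    "continue in a calm, even tone",
}

cues = NonVerbalCues(smile_intensity=0.7, gaze_on_screen=0.9,
                     voice_pitch_variance=0.8)
mood = infer_mood(cues)
print(mood, "->", RESPONSE_STYLE[mood])  # cheerful -> mirror the energy...

The real system replaces those hand-set thresholds with trained models of expression, gaze and prosody, but the adapt-to-the-listener feedback loop is the same idea.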

And If This Comes Through...


Professor Roddy Cowie, from the School of Psychology at Queen’s University, gives a timeline of about 20 years. But given the scale and enthusiasm of the SEMAINE project, and given that several similar projects are underway all over the world, Digit hazards a guess that we’ll be chatting and joking with our computers quite routinely within the next decade. And then, perhaps, they may just end up replacing dogs as man’s best friend.

7. WIRELESS ELECTRICITY: Power’s In The Air
If phones, mice and keyboards could go wireless, why not everything else? In fact, about a hundred years ago, that untamed genius Nikola Tesla had already begun to build a tower at Wardenclyffe, N.Y., to demonstrate the transmission of electricity without wires.
On a humbler scale, researchers at MIT are in the process of repeating the experiment with their own ideas and less ostentatious techniques.

WiTricity
Marin Soljacic, Assistant Professor of Physics at MIT, has spent a considerable number of years trying to figure out how to transmit power without cables. Radio waves lose too much energy during their passage through the air, and lasers are constrained by the need for line-of-sight. Soljacic decided to use resonant coupling, in which two objects vibrating at the same frequency can exchange energy without harming things around them. He used magnetic resonance and, along with his colleagues John Joannopoulos and Peter Fisher, succeeded in lighting up a 60-watt bulb two metres away. What they did was this: two resonant copper coils were hung from the ceiling, two metres apart. Both were tuned to the same frequency and one had a light bulb attached to it. When current was passed through one coil, it created a magnetic field, and the other coil resonated, generating an electric current. And then there was light. The experiment succeeded in spite of a thin screen being placed between the two copper coils.
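The physics behind that demonstration is often summarised with a standard coupled-mode-theory result from the wireless-power literature: with coupling coefficient k between the coils and quality factors Q1 and Q2, the figure of merit is U = k * sqrt(Q1 * Q2), and the best achievable link efficiency is U^2 / (1 + sqrt(1 + U^2))^2. The Python sketch below uses purely illustrative k and Q values, not the MIT team’s actual parameters.

# Coupled-mode-theory estimate of two-coil resonant power transfer.
# The k and Q values below are illustrative guesses, not measured figures.

import math

def max_efficiency(k: float, q1: float, q2: float) -> float:
    """Best achievable link efficiency for coupling k and quality factors q1, q2."""
    u = k * math.sqrt(q1 * q2)
    return u ** 2 / (1.0 + math.sqrt(1.0 + u ** 2)) ** 2

# Even loosely coupled coils (far apart) transfer well if Q is high enough:
for k in (0.001, 0.005, 0.02):
    print(f"k = {k}: efficiency ~ {max_efficiency(k, q1=1000, q2=1000):.0%}")

The numbers make exactly the WiTricity point: high-Q resonators keep the efficiency respectable even when the magnetic coupling between the coils is very weak, that is, when they are metres apart.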

And If This Comes Through...

One of the most obvious results is that we won’t have dozens of cables to trip over in our offices and rooms. The research team’s primary aim is a cable-free environment in which your laptops, PDAs and mobile phones could charge themselves (with all that electricity floating around) and even, maybe, shed the batteries that are such an essential part of our portable devices today. Magnetic fields interact very weakly with biological organisms, which makes the technique far safer for us. While this experiment happened about a year ago, the team is still hard at work trying out other materials to raise the efficiency of the power transfer from 50 per cent to 80 per cent. Once that happens, both industry and individuals will grab hold of it and never let go.

8. TABLE-SCREENS: Scribbling On The Desk
There isn’t a student alive who hasn’t at some time scribbled his name (or a caricature of his prof.) on his school desk. How much more exciting it would have been if your desk were actually a graphical user interface! Experts at Durham University are aiming for just that with their ‘Smart-Desk’ initiative.

Interactive Surfaces
The Active Learning in Computing department at Durham University, UK is designing interactive multi-touch desks at its TEL (Technology-Enhanced Learning) research group, hoping to replace the traditional desk with cell-phone-like touch-screens that can act as a multi-touch whiteboard, a keyboard and a mobile screen that several students can use at the same time. Dr Liz Burd and her team have linked up with private enterprises to design software that will enable all these surfaces to be networked and connected to a main smartboard. The computer becomes a part of the desk.
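The networking side of that is conceptually simple: each desk publishes its touch events, and the main smartboard subscribes to all of them. Here is a toy in-process Python sketch of the idea; the class and event names are invented for illustration and have nothing to do with the Durham team’s actual software.

# Toy publish/subscribe hub for networked desk surfaces. Each desk publishes
# stroke/touch events; the teacher's smartboard subscribes to all of them.

from typing import Callable, Dict, List

Handler = Callable[[str, Dict], None]

class SurfaceHub:
    def __init__(self) -> None:
        self._subscribers: List[Handler] = []

    def subscribe(self, handler: Handler) -> None:
        self._subscribers.append(handler)

    def publish(self, desk_id: str, event: Dict) -> None:
        for handler in self._subscribers:
            handler(desk_id, event)

hub = SurfaceHub()

# The main smartboard mirrors every desk's activity:
hub.subscribe(lambda desk, event: print(f"[smartboard] {desk}: {event}"))

# Two student desks report multi-touch input:
hub.publish("desk-03", {"type": "stroke", "points": [(10, 12), (40, 44)]})
hub.publish("desk-07", {"type": "touch", "fingers": 2, "at": (120, 80)})

In a real classroom the hub would sit on the school network rather than in one process, but the fan-in of many desks to one shared board is the essential shape of the system.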

And If This Comes Through...


Instant visual displays of topics being discussed, on-screen interactive mathematics, group efforts in problem-solving and more involvement of students in the task at hand: the possible advantages of the smart desk to teachers and students seem endless. Students who tend to isolate themselves or resist participation in class would be drawn into interacting. Teamwork would be a natural consequence of multiple users on single screens. Each student could be presented a task or problem suited to his or her individual capacities. More active and creative tasks would replace passive listening. The team aims to fill all schools in the UK with these desks within a decade, and keeping in mind the pace of technological advance in India, we should see at least some schools in this country doing the same in the near future.

9. WRAP-AROUND COMPUTERS: Open New Folder
Flat screens are in. But what if we could have screens folded or curved around any surface that was convenient to reach? What if animated billboards could be folded round the pole of a street light? What if you could watch your favourite movie by stretching the screen over the back of a chair?

Bendable and flexible electronics are already all over the news. Trouble is, most of them cannot be tied up or wrapped around uneven surfaces or complicated shapes. Nanotechnology (what else?) has the answer.

Elastic Computing
Takao Someya, professor of engineering at the University of Tokyo, and his team of researchers have added carbon nanotubes to a highly elastic polymer to make a conductive material, which they then used to connect organic transistors in a stretchable electronic circuit. To induce conductivity, Someya and his team combined single-walled carbon nanotubes with an ionic liquid, producing a black paste. This paste was mixed into liquid polymer, which was poured into a cast and dried. As a result, the nanotubes were spread evenly through the material, forming a network that lets electrical signals pass through in a controlled manner. To make the material more stretchable, it was perforated into a net and then coated with a silicone-based material. Such a material could be used to make an ‘electronic skin’ for robots.
In a paper published in ‘Science’ magazine, Someya reported that the material has the highest conductivity among soft materials in the world. Moreover, the material can stretch to about 134 per cent of its original size.

And If This Comes Through...

Mass production of nanotubes would, in turn, assist in the bulk production of these elastic conductors. The new material could be used to make displays, actuators or computers. With foldable keyboards already in the market for quite a while, a stretchable screen would make carrying around your laptop infinitely easier. It won’t be long before you’ll reach into your pocket for a handkerchief and pull out your PC. Look before you sneeze.

10. ULTRA—COMPRESSED MUSIC FILES: Micro-MP3
The Apple iPod (160 GB) can hold about forty thousand songs and, yes, people are buying it. As the capacity of MP3 players increases, strangely, our list of must-have favourite songs also expands exponentially. It doesn’t matter how many songs you’ve got with you; the song you want is always elsewhere. So for those of you out there who have a million favourite tunes: never fear, Rochester’s here. Again.

Zipped A Thousand Times
Researchers at the University of Rochester have succeeded in digitally reproducing a piece of music in a file almost 1,000 times smaller than a regular MP3 file. Mark Bocko, professor of electrical and computer engineering, and his team announced this achievement at the International Conference on Acoustics, Speech and Signal Processing, held in Las Vegas on April 1, 2008.
Even though the results are not perfect, they come very close. The team took as a sample a short musical piece, a 20-second clarinet solo, and compressed it to less than a kilobyte. The file was then replayed by a combination of physics and knowledge of how a clarinet works. They fed into the computer everything about clarinet playing (including the movement of the fingers, the pressure on the mouthpiece, and so on) to create a virtual clarinet based on real-world dimensions and parameters. They then made a virtual clarinet player for this virtual clarinet by feeding in a model that tells the computer how a human player interacts with the instrument, including the fingerings, the force of breath and the pressure of the player’s lips, to determine how each would affect the response of the virtual clarinet.
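Resynthesising sound from a handful of parameters is the heart of the trick. The Rochester model is far more sophisticated, but the flavour can be had from the classic Karplus-Strong algorithm, which regenerates seconds of plucked-string audio from just a frequency, a duration and a decay constant. This is a generic textbook example (a string, not a clarinet) with no connection to the team’s actual code.

# Karplus-Strong plucked-string synthesis: a tiny physical model. The whole
# "recording" is three numbers (frequency, duration, decay), from which
# seconds of audio are regenerated, in the same spirit as the clarinet model.

import random
import struct
import wave

def karplus_strong(freq, duration, decay=0.996, rate=44100):
    period = int(rate / freq)
    # A buffer of random noise models the energy of the initial pluck.
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for i in range(int(duration * rate)):
        # Average two adjacent delayed samples and damp them: a crude
        # low-pass filter, mimicking a string losing its high frequencies.
        buf[i % period] = decay * 0.5 * (buf[i % period] + buf[(i + 1) % period])
        out.append(buf[i % period])
    return out

samples = karplus_strong(freq=220.0, duration=2.0)  # two seconds of A3
with wave.open("pluck.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(44100)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

Three numbers in, two seconds of audio out: that is the compression principle. The Rochester team pushed it much further, modelling the instrument and the player in enough detail that a real 20-second performance can be regenerated from under a kilobyte of parameters.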

And If This Comes Through...

Not only does this imply the possibility of ultra-compressed music files, but also the incredible prospect of recording the performer along with the performance. Once a computer figures out the typical style of a player, his every breath and movement, it could play a tune long after the player is gone. According to Professor Bocko, improvement in quality is inevitable as the algorithms become more accurate and acoustic measurements are further perfected. The day isn’t far off when your cell phone will hold all the music ever produced in the whole wide world, and the movies too.

Google's Internet Bus

Hi all,

We have used Google mainly as a search engine for our needs.

But here is another surprise from Google.

Google has recently introduced the Internet Bus, which will travel across various cities in Tamil Nadu. The bus is custom built and equipped with a broadband Internet connection. Google's main motive in introducing the Internet Bus is to give people in remote areas and villages an idea of what the Internet is and how it can be useful in their daily lives.

The route that the bus follows is shown below

03 Feb Chennai
05 Feb Vellore
06 Feb Krishnagiri
07 Feb Salem
11 Feb Erode
12 Feb Tiruppur
14 Feb Coimbatore

18 Feb Dindigul
19 Feb Madurai
23 Feb Tirunelveli
25 Feb Nagercoil
27 Feb Tuticorin
02 Mar Pudukkottai
03 Mar Tiruchirappalli

06 Mar Thanjavur
08 Mar Kumbakonam
10 Mar Neyveli
12 Mar Cuddalore
13 Mar Tiruvannamalai


For more details, visit:


Internet Bus

Tuesday, February 3, 2009

Anna University-College reopening date


ANNA UNIVERSITY AND ITS AFFILIATED COLLEGES ARE REOPENING ON MONDAY, 9TH FEBRUARY 2009

Anna University

Anna University is one of the oldest institutions in Chennai.
Situated in Guindy, it was, as the name suggests, established in honour of the great Aringnar Anna. Anna University has many self-financing colleges affiliated to it (one of them is Jeppiaar Engineering College, where I study, hee hee). It is said that a degree from Anna University carries great value and can lead to a good job. But the situation here is now so grim that people in various disciplines are already losing their jobs, and students pursuing courses at Anna University and its affiliated colleges are suffering from a lack of placements, leaving them hard-pressed to land a job. Even those who have already secured placements are in danger of losing them. The industry will gradually improve, but one thing is for sure: freshers need a degree to get a job. So get your degree before you apply for one.


Also be sure to check out my other blogs.

Monday, February 2, 2009

How to apply for Google AdSense



Hello everybody,

Everyone must be wondering how to earn money from the Internet.

Now Google makes it easy for anyone to earn from the Internet.

Yes, with Google AdSense, almost everyone can earn from the Internet.

And you must be wondering how to apply for Google AdSense.

Here I'll share my tips on getting AdSense approval easily.

Please be sure to follow these steps.

Step 1: Get your own domain (like .com, .co.in, etc.). In general, you must own the domain; a .com or .co.in is usually preferred and doesn't cost more than $10 per year.

Step 2: Apply for Google AdSense.

Here's the link.

Application for Google AdSense


Once you have applied, provide your correct address and contact details and wait for the confirmation mail from Google. Google usually approves applications within a day or two, provided your site complies with its terms and conditions.

Please stick to the terms and conditions; if your application is rejected for one reason or another, it is very difficult to get approved again.

Terms and conditions


After you get approval, log in to your Google AdSense account.

You can log in here:

Google AdSense login

Then start earning using Blogger.

Blogger login

Then start posting about anything; designing your blog is totally free. But be sure not to put sexual or pornographic material on any page of your blog, or Google will reject your blog and cancel any payments you have accumulated. I will post on how to succeed with AdSense shortly.

Be sure to use your own Google AdSense publisher ID in the ad code on your site; if the ads on your blog carry someone else's publisher ID, that person will earn from them instead of you. So be careful.

Happy blogging and earning!!!
