Generally, the better a jacket is at keeping the wet out, the better
it is at keeping the wet in. While the wind whips and the rain pelts,
you stay dry … until your body temperature climbs and that muggy,
wrapped-in-plastic feeling sets in.
Waterproof-yet-breathable fabric technologies are abundant in the
outerwear world, but even with rain jackets marketed as
“ultra-breathable” (as most are), some condensation still builds up
inside the shell, leaving you clammy, wet, and wondering why you wore a
jacket in the first place.
With its new Jammu jacket,
The North Face is the latest manufacturer to tackle this elusive
unicorn of a truly breathable, truly weatherproof piece of outerwear.
The special sauce inside the Jammu is Polartec’s NeoShell membrane, which Polartec claims delivers the industry’s highest level of clammy-free, breathable waterproofing.
The Jammu is an expensive jacket ($400), but the price is made less
painful by a handsome look and a smart design. The cut is just generous
enough to allow for light layering underneath, while the soft shell fabric
provides enough stretch to keep you from feeling claustrophobic.
Pockets placed high on the body allow easy access when wearing a pack
harness, and the helmet-compatible hood adjusts to fit anybody’s dome.
It seems best suited to hiking, camping, skiing or snowboarding in
above-freezing conditions.
The outer layer is complemented by a soft fleece inner lining that
gives the jacket a warm, cozy feel. Sandwiched between the two is
Polartec’s NeoShell fabric. Thin, waterproof, and highly air-permeable,
the NeoShell membrane, Polartec claims, pulls sweat vapor out of the
jacket at extremely low pressure. The result? Heat and moisture
supposedly get vented before you get a chance to feel them.
Even though every foul-weather jacket on the market claims to be the
most breathable ever, the fact is, it’s pretty hard to test how much
moisture a jacket retains when you’re out in the field. Many of the
“most breathable jackets in the world” feel very much the same, and
most of my tests end with a limp “meh.”
On a testing day that began with rain that turned into snow and
ended with an inexplicably hot sun, the Jammu eliminated “meh” from my
lexicon.
The outer shell is remarkably tough, especially given how soft it
is. Rain beaded up and rolled off, and snow brushed off without
soaking. Foliage couldn’t scuff it; I waded through shoulder-high brush
several times, but the tough polyester exterior looked as good as new
by the end of the day.
And, as it turns out, the Polartec NeoShell layer is the secret to
the Jammu’s success. I built up a decent amount of heat chugging uphill
with a 30-pound pack, but the jacket never felt overly hot. The sun
eventually poked its head out, but I didn’t want to take off my pack to
strip off the Jammu, so I just opened up the pit zips and kept going. I
never felt anywhere near clammy.
On a sunny, windy day hike in Lake Havasu, the Jammu repelled
20-plus mile-per-hour gusts all day. But despite the afternoon sun, I
never overheated. I was even able to run the last mile down the trail
without feeling like I was wearing a sauna suit.
WIRED Breathes amazingly well. Blocks rain like a
hard shell. Excellent fit. Fuzzy lining adds warmth and soft touch.
Performs well in a multitude of conditions.
TIRED Costs more than a PlayStation 3. No media cord ports. At 1 pound, 10 ounces, it may be too heavy for ultralight backpackers.
Feb 29, 2012
Feb 28, 2012
Software Technology :: Tomahawk, the Most Important Music App Nobody’s Talking About
In order for a technology to take off these days, it has to be simple. Twitter, Facebook, iTunes, Spotify — each can be summed up in a sentence or so and readily understood from the very first time you use it.
Tomahawk is more complicated, but if you’re a music fan who listens to music on a laptop or desktop — and has friends who do, too — it warrants a try, and possibly a place in your quiver of favorite music apps.
First if not foremost, Tomahawk is a media player along the lines of iTunes or Winamp, which can play the music stored on your computer. The fun starts when you install Tomahawk’s content resolvers, which are basically plug-ins that can find music to play in a bunch of different streaming services, using their search APIs (application programming interfaces) — Spotify, Official.fm, YouTube, Bandcamp, Grooveshark and others.
Whenever you try to play a song, Tomahawk might use any combination of these sources to provide the audio. For playing your own locally stored music, that’s a fairly useless feature. You already have the song, so why would you want to play it in Spotify instead? However, Tomahawk gets more useful when you’re trying to play stuff you don’t already have — for example, a playlist from a Tomahawk-using friend.
“When I want to play a song, or somebody sends me a song, they’re not sending me a song — they’re sending me the metadata about that song — artist, track, possibly the album,” explained Tomahawk open source contributor Jason Herskowitz. “Then, on my side, Tomahawk says, ‘OK, out of all the content sources that you have access to, what’s the best match?’”
Within the same playlist, Tomahawk might grab one track from your local machine, another from your friend’s machine, a third from YouTube and a fourth from Spotify. After all, you don’t care where that music lives; you just want to hear it.
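To make the resolver idea concrete, here is a minimal, hypothetical sketch of the kind of metadata-based lookup described above. The source functions, the scoring, and the Track type are illustrative assumptions for this sketch, not Tomahawk’s actual plug-in API.

```python
# Hypothetical sketch of metadata-based resolution, in the spirit of
# Tomahawk's content resolvers. Function names and scoring are invented.
from dataclasses import dataclass

@dataclass
class Track:
    artist: str
    title: str
    album: str = ""

def search_local(track):
    # Pretend lookup against a local library index; returns (quality, url) or None.
    return None

def search_streaming(track):
    # Pretend lookup against one streaming service's search API.
    return (0.8, f"stream://{track.artist}/{track.title}")

SOURCES = [search_local, search_streaming]  # one entry per installed resolver

def resolve(track):
    """Return the best playable URL for a piece of track metadata."""
    candidates = [hit for hit in (source(track) for source in SOURCES) if hit]
    if not candidates:
        return None
    # Pick the highest-quality match, wherever the audio actually lives.
    return max(candidates)[1]

print(resolve(Track("The Beatles", "Rain")))
```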
Note that Tomahawk can also play tracks from your Tomahawk-using friends’ computers, which makes it a P2P streaming client: you can listen to your friends’ collections, tap into your work computer’s music from your home computer, and so on.
Tomahawk is much easier to use now than it was back when we wrote our lengthy tutorial on how to use it, but it still requires a small degree of technical sophistication. The main hurdle: installing the content resolvers, which are the plug-ins that let Tomahawk hook in to YouTube, Spotify and the rest.
It’s easy enough. You can either go to the Tomahawk page and choose the resolvers you want from the list — or (this is easier) just go to Tomahawk > Preferences > Resolvers and install them from there. (In the case of Grooveshark, Spotify and any other unlimited music subscriptions, you’ll need to be a premium subscriber in order for it to work.)
“Local network” lets you play songs from other computers on your own home network, the same way you can in iTunes. “Extended network” lets you tap into your friends’ collections on their computers, so that if you don’t have a song on a given playlist, it plays from their machine. (They need to be running Tomahawk at the time.)
The latest version of Tomahawk (0.3.3) includes the ability to listen along in real time with your friends and make radio stations that resolve to any sources to which you have access. (The latter uses technology from The Echo Nest, publisher of Evolver.fm.) It also includes nice extras like the ability to choose only high-quality music from YouTube.
Now for the $64 million question.
“How does Tomahawk plan to make money?” asked an audience member at NY MusicTech Meetup.
Herskowitz replied, “We don’t.”
Audience member: “So, why do you….”
Herskowitz: “Tomahawk is an open source project that we work at out of the goodness of our hearts and a passion to solve this problem: All of the media players that have been around for 10 years were built to solve problems of 10 years ago. We don’t need [CD-R] label-makers, we don’t need to print CD cases, we don’t need to worry about a lot of things that old players like Winamp, which I worked on back in the day, has to worry about.
“The problems that you need to solve today are, you’ve got silos of music everywhere. I’ve got my library of music in Exfm, which I love, I’ve got stuff on Spotify, I’ve got stuff everywhere else, and I’m forced as the user to bounce between interface to interface to interface, and there’s no way on earth that I can listen to a playlist that goes from the Beatles to my cousin’s band to my favorite stuff at home to some live recording that I found. This basically solves that problem. It’s a very user-centric view.”
Indeed. Still, Tomahawk doesn’t have it all. For sending music to Apple AirPlay speakers, for instance, you’ll need to use Airfoil software (at least until OS X adds native AirPlay support later this year).
As for Android and iPhone versions, Herskowitz said, “not yet.”
Feb 27, 2012
Remote Technology :: Access Sensitive Data Remotely
A virtual private network (VPN) is a network that connects a remote
PC — your laptop beside the pool in Shanghai — to a central network
elsewhere, like your employer's secure network back home.
Think of VPNs as tunnels: secure tunnels that your sensitive data can travel through on its way to its destination.
The purpose of a VPN is to allow you to access your data from anywhere without any sacrifice of security. With a VPN connection, everything seems as though you are simply connected directly to your network, regardless of where in the world you might actually be.
Running over a Wi-Fi network, a well-secured VPN session will appear as a standard https connection to anyone watching the packets as they come and go. That means it will defeat most surveillance, interception and data theft.
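One rough, do-it-yourself sanity check is to compare the public IP address the internet sees with the VPN down and then up. The sketch below is a minimal example; it assumes the third-party `requests` package and uses the public ipify echo service as an example, but any similar IP-echo endpoint would do.

```python
# Minimal sketch: confirm the tunnel is carrying your traffic by checking
# which public IP the outside world sees. Assumes the `requests` package
# and uses api.ipify.org as an example IP-echo service.
import requests

def public_ip() -> str:
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

print("Apparent public IP:", public_ip())
# Connect your VPN client and run it again: with the tunnel active, the
# address reported should belong to the VPN endpoint, not the hotel Wi-Fi.
```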
Sounds good, right? Here's our guide to setting up your own VPN for secure connections wherever you go.
Hardware Technology :: PlayStation Vita Review: Finally, Console-Level Gaming in a Handheld Device
The Sony PlayStation Vita officially launches today, bringing with
it over two dozen games and a host of promises. Without a new version
of the PlayStation console announced, Sony is clearly counting on the
PS Vita to restore some of the prestige lost in the gaming world with
the troubles dogging its PlayStation Network. Whether that will happen
remains to be seen, of course, but I can say that the Vita is a
remarkable achievement in handheld gaming devices.
It’s nothing if not sleek, small enough to fit in a pocket (albeit a fairly big one) but with a screen that can’t help but remind one of the iPhone 4’s Retina display, only bigger. Though the Vita’s 960 x 544 screen is slightly lower-resolution than the Retina display, the difference is largely unnoticeable.
Vita’s tight design and relative lack of moving parts work to enhance its durability. Not only have I let my 9- and 11-year-old kids play with it, but they and I have dropped it a few times and it still looks brand new. It fits comfortably in two hands, with miniaturized versions of the PlayStation controls that work very well, even if using the tiny dual analog joysticks did make my hands cramp up after a while. But I have unusually large hands, so your mileage may vary.
Having touch capabilities on the back of the Vita as well as on the front display is an interesting innovation, one which I found cumbersome at first but gradually grew able to handle with reasonable adeptness. The front and back cameras are low-res enough that nobody is likely to use them much for taking photos or videos, but serve very well in their primary function: enabling the augmented-reality feature of the device. Top all that off with an ARM Cortex-A9 quad-core processor, a quad-core graphics processor and 512 megabytes of RAM and you’ve got a powerhouse of a handheld. To put that all into perspective, it has twice as much memory as the PlayStation 3 and more computing power than the iPad 2.
The PS Vita does more than just play games. It comes with a web browser, but one you’re only likely to use for quickly looking something up as it’s pretty mediocre by today’s standards. Google Maps is also included, which works pretty much as you’d expect if you’ve ever used it on a smartphone or tablet. Though the GPS seemed pretty accurate, I don’t see this being a widely used app — I just can’t imagine too many scenarios where it would be easier to pull out your Vita than your smartphone, although I can see it being useful for people without smartphones.
Vita has an app called Near, which adds a social aspect to the device by showing you nearby Vita users, what they’re playing and what trophies they’ve won. As I only had one Vita to try out, I wasn’t able to test this app, but I understand that it does maintain privacy standards. The device also comes with a content manager, which is a well-designed app that allows you to transfer information between the Vita and a PS3 or computer. And then there’s the remote play feature, which was notoriously poorly implemented on the PSP. I was only able to get it to work a little bit and then really slowly, but Sony has promised that it will improve dramatically shortly, especially after more PS3 games come out that enable the feature.
You can also watch videos and listen to music on the Vita, and it seems to do just fine at both, but it’s no serious threat to the better smartphones with regard to either.
But, let’s face it, nobody is going to buy the Vita for any of those things. The clear selling point of this device, and Sony clearly knows this, is the games. It’s been discussed here on GeekDad and elsewhere, but Uncharted: Golden Abyss is the clear star of the launch-day lineup and one that demonstrates better than any other game I tried out (including the Welcome Park app that comes loaded on the device) how good a job the Vita does in providing a great gaming experience.
Let me put it this way: Despite having over a dozen other games to try out, I had to play Uncharted all the way to the end. I found myself getting immersed in the game in a very similar way to when I was playing the console games, in which the device I was playing the game on became just part of the experience. It really did feel that natural, and that’s as much a testament to the device’s features as it is to the game’s designers and developers for taking advantage of them.
Honestly, the other games were a bit hit and miss, though there were some very good ones. But it was Uncharted that really made me a true believer in the PS Vita. If a series as rich and cinematic as that can have a handheld installment that stands right up there with the console installments, then so can any other series. (Check back here on GeekDad in the next week or so to read my full reviews of selected games from the opening Vita lineup.)
With all the games available to play already, it would be easy to overlook the Vita’s operating system, but it could be argued that the OS is one of its greatest strengths. It allows you to run and effortlessly switch between up to five apps at once. Want to pause your game to check out who’s nearby and send a friend request? Just press the PS button under the left analog stick, scroll through the apps screens with a flick of your finger just as you would in iOS or Android, tap the Near app, send the request then swipe left or right to the game’s screen and tap “Continue.”
The only delay in this process is how long it takes you to find your new potential friend in Near, because everything else is virtually instantaneous. Really: Even pausing and restoring a game as big as Uncharted was seamless, dropping me back into the game as though Nathan Drake had only blinked. I tried doing this in every game I played, and in the middle of all kinds of processor-intensive scenes, and not only did none of the games crash, but every one of them restored perfectly. If there are any glitches in this OS, I wasn’t able to find them, and I’m usually pretty good at that sort of thing.
The Vita has its weak points, though. The battery life is probably the worst: I wasn’t able to play for more than three hours in any game without getting a low-battery warning. Giving it a full charge only took about two hours, though, which isn’t bad. Storage is another issue: The Vita carries no internal storage, presumably as a way to keep prices down, and memory cards for it are proprietary and expensive.
And it has its annoyances: For me, the biggest is that the cover to the slot where game cards go was really difficult to open. If you don’t have long fingernails, you’ll need to either leave the cover open or carry some kind of small, thin object with you. Honestly, I don’t see how the design for this made it to the final product when the rest of the Vita seems so well thought out. I had to resort to keeping a small, thin knife next to me for this purpose, and prying the cover open very carefully, because literally nothing else I tried worked. Even a dime was too thick for the purpose, as were the edges of the game cards and the boxes they came in. If I want to take my Vita on a plane, I’ll have to come up with some other idea, since I have a feeling the TSA will not accept “I need it to open my handheld gaming device” as a valid reason for bringing a knife on board.
All in all, the PS Vita really has managed to bring console-level gaming to a handheld in a way nobody has done before. I love my iPhone 4 and my iPad 2, and I play games on them all the time, but the iPhone screen is too small to let you forget you’re using a phone and the iPad is too big to fit naturally in your hands for long periods of time. There are very few pockets capable of holding an iPad, and they are both definitely more fragile than the Vita.
I honestly felt like I had the same kind of immersive experience playing on the Vita that I’m used to having on the Xbox and PS3, only with the added benefits that only a handheld can provide like a touchscreen and gyroscopic movement. And I was able to play on this “console” with my headphones on while my kids watched a program on the main family TV, something I’m not able to do with the actual consoles. And fellow parents will understand how nice it was to be able to play a game rated “Mature” while my kids were in the house and awake, without having to worry they’d see or hear something I’d rather they didn’t.
The PS Vita is available for $250 for the Wi-Fi version and $300 for a bundle with the version that adds AT&T 3G capability and an 8GB memory card. You’ll have to pay for a data plan if you want to use the 3G after the included DataConnect pass runs out. Considering that you can currently get a 160GB PS3 for $250 this may seem a bit pricey, but not horribly so when you consider how much power Sony has packed into so little space. Games run between $30 and $50, which is what you’d pay for any console game and therefore more than you’d pay for games for most handhelds.
WIRED The PS Vita delivers the closest thing yet to a console-level experience in a handheld device, with well-designed handheld features. The rear touch-panel is an innovation I expect to see on other devices soon: While it takes some getting used to, it is fairly natural to use when your fingers are already on the back of the device because your thumbs are on the controls.
TIRED Sony has yet to learn that using proprietary storage media makes them no friends, and this is very much in evidence with the Vita. The battery life could well be enough to keep a lot of people from dropping a few hundred bucks on the Vita, although I’m sure external battery packs will make an appearance soon. And there’s that annoyance of the game card slot cover, which seems like a small thing, but as it’s something you have to use a lot will drive you a little bit nuts if you have the same trouble I did.
CONCLUSION PlayStation Vita is an excellent gaming experience overall, and worth the money. I don’t see the point in paying $50 more for the 3G and then having to get a plan on top of that, so if I were buying one I’d go for the Wi-Fi version — although the 8GB memory card that comes with the 3G system costs a fair bit on its own. (Amazon is offering a deal that gets you a free 4GB memory card with the Wi-Fi version for a limited time.) The battery life is the only thing that might keep me from buying one, but there are so many opportunities to plug devices in these days that I don’t think it’s too huge a deal.
Feb 26, 2012
Software Technology :: In the Steps of Ancient Elephants
One day, sometime around seven million years ago, a herd of bizarre,
four-tusked elephants crossed the desert that stretched over what is
now the United Arab Emirates. Thirteen of the behemoths plodded along
together, perhaps moving towards one of the wide, slow rivers which
nourished stands of trees in the otherwise arid region. Sometime
later, a solitary animal trudged across the herd’s path in another
direction. We know all this because paleontologists have found the
tracks of these massive animals.
Scientists were not the first people to wonder about the fossil footprints. The huge tracksite – which stretches over an area equivalent to seven soccer fields – had been a source of speculation among local Emirati people for years. Dinosaurs and even mythical giants were thought to have been responsible for the potholes. It wasn’t until the spring of 2001 that a resident of the area, Mubarak bin Rashid Al Mansouri, led researchers from the Abu Dhabi Islands Archaeological Survey to the immense fossil field.
Dinosaurs had not created the tracks. The snapshot of time represented by the trace fossils came from the Miocene, sometime between six and eight million years ago — all the gargantuan non-avian dinosaurs had died out over 60 million years previously. Based upon the geological context and what had been found in the area before, fossil elephants were quickly identified as the trackmakers. The site was named Mleisa 1.
Researchers Will Higgs, Anthony Kirkham, Graham Evans, and Dan Hull published a preliminary report on the trackway in 2003. But the full scope of the site has not been understood until now. With the help of a Canon S90 pocket camera rigged up to a kite, a multidisciplinary team of scientists led by Faysal Bibi from the Humboldt University of Berlin and Brian Kraatz of the Western University of Health Sciences have finally been able to stitch together a brief glimpse into the social lives of prehistoric elephants. The team published their study today in Biology Letters.
The paper presents a direct look at fossil elephant social structure. Such peeks into prehistoric behavior are rare. While many archaic elephant tracks have been found before – going back to about 9 million years ago – these often record the movements of solitary animals. No one had ever found traces left by an entire herd before. The Mleisa 1 trackway is truly exceptional.
Based upon the assembled photograph of the site, Kraatz and co-authors counted at least thirteen elephants of different sizes in the herd. Exactly which species of prehistoric elephant they belonged to is unknown. At least three different elephant species existed in the area at the time, but, based upon fossil abundance and the paleoecology of the elephants, the researchers suggest that the tracks were created by Stegotetrabelodon. Although roughly the same size as modern elephants, this kind of proboscidean had a long, low skull with four conical tusks jutting out of its jaws.
That these animals were probably moving together is revealed by the organization of the tracks. “The consistent preservation of the prints in the herd and their close parallel orientation,” Kraatz said, indicates that the tracks “were all created then desiccated at around the same time.” This major trackway stretches for over 190 meters. And there’s another, even longer trackway at the site. A 260 meter long trail records the movements of a single, large individual sometime after the herd passed by.
Was the large herd primarily composed of females and led by a matriarch, like modern elephants? That is difficult to determine. The tracks themselves do not offer definitive evidence of sex. But Kraatz and co-authors suggest that the prehistoric elephants had a social structure similar to their living cousins. Since males of modern elephant species leave their herds when they reach sexual maturity, the same might have been true of the prehistoric species. The solitary individual, therefore, might be a male, and the herd might therefore be composed of females.
With no soft tissues or even fossil bones to study, size makes all the difference. Since mature male elephants are typically larger than females, a size difference between the solitary animal and the largest member of the herd would be consistent with the idea that the lone elephant was a male.
Frustratingly, though, the study concluded that the lone trackmaker and the largest members of the herd were about the same size. Still, Kraatz pointed out that there might be a few clues that the solitary individual was a male, after all. “The stride length of our solitary individual is longer than any [individual] in the herd,” Kraatz said, and this is consistent with the idea of the animal being a male. Likewise, Kraatz noted, “the left-right print widths of the solitary individual are also wider than any of those in the herd – another indicator that it was bigger.” This would mean that, like modern elephants, males in this prehistoric species left their herds as they became sexually mature and often traveled alone, while females would group together in herds.
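Purely for illustration, here is a back-of-the-envelope sketch of the kind of comparison the researchers describe, using invented measurements rather than the study’s data; stride length and print width are treated as crude proxies for body size.

```python
# Illustrative only: invented stride lengths and print widths (in metres),
# not the measurements reported in the study.
herd_strides = [1.9, 2.1, 2.3, 2.4, 2.6]       # strides measured in the herd
herd_widths = [0.45, 0.50, 0.52, 0.55, 0.58]   # left-right print widths in the herd
solitary_stride, solitary_width = 2.9, 0.63    # the lone trackmaker

# If the solitary animal out-measures every herd member on both proxies,
# that is consistent with (but does not prove) a larger, lone male.
larger_on_both = (solitary_stride > max(herd_strides)
                  and solitary_width > max(herd_widths))
print("Solitary trackmaker exceeds herd on both size proxies:", larger_on_both)
```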
The trackway indicates that prehistoric elephants were forming herds by seven million years ago at the latest. And, since Stegotetrabelodon is a relatively distant cousin of modern elephants, herding behavior may have originated at a much earlier time and been shared by various prehistoric species.
“We know that the two elephant species today show female-led family groups, and this study shows that such behavior extends beyond their last common ancestor, if indeed, the track maker was Stegotetrabelodon,” Kraatz said. That is the wonderful thing about fossil trackways. The traces record a few moments of prehistoric time in which we can walk in the footsteps of fantastic prehistoric creatures.
As Kraatz himself put it, “The most interesting part here, in my mind, is not what the answer is to the question about the antiquity of this behavior, it’s the fact that we could even date it back this far. This is nothing short of amazing considering the difficulties in inferring any sort of behavior from fossils.”
Operating System Technology :: Why Desktop Apps Would Be Bad News for Windows 8 Tablets
Windows 8 represents a huge departure for Microsoft.
First, the platform is slated to run on both x86 processors for PCs, and on ARM chips for tablets. Second, it’s a single OS platform with two distinctly different user interfaces. You’ll be able to divide your time between the touch-optimized Metro, which borrows its look, feel and navigation from the Windows Phone OS, and a traditional Windows 7-like desktop experience.
On desktop PCs, this dual-interface approach shouldn’t be a problem. Metro doesn’t demand many resources. It should run on PCs just fine.
But there’s still a nagging question: How will legacy desktop applications run on ARM-based tablets, if they run on ARM at all? Desktop apps can be resource hogs, and ARM-based tablets may not have the horsepower to run these programs quickly and elegantly. Not only could application performance suffer, but desktop apps could also suck battery capacities dry.
Well, a bit of clarity is maybe, possibly emerging. A Thursday report says Microsoft might allow a limited number of Windows 8 desktop apps to run on ARM-based tablets.
Microsoft is considering “a restricted desktop for Windows 8 ARM,” sources at The Verge say. Applications would have to earn special certification, and would likely be limited to Internet Explorer and Microsoft Office. This jibes with reports that Microsoft has been working on a lighter-weight version of Office for tablets.
We asked Microsoft to clarify, and received a “no comment.” However, past statements from Windows lead Steven Sinofsky suggest ARM tablets won’t support the desktop component of Windows 8.
“We’ve been very clear since the very first CES demos and forward that the ARM product won’t run any X86 applications,” Sinofsky said to investors at a financial analyst meeting in September. “We’ve done a bunch of work to enable a great experience there.”
So how do developers feel? Alexandre Brisebois, a senior .Net developer at RunAtServer, thinks it would be best for Microsoft to offer the same Metro and desktop interfaces everywhere, on both x86 and ARM devices. Conversely, Darren Baker, the business development director at Sogeti Global (a company that makes custom Windows products for businesses), says offering a desktop interface of any kind could be problematic for new tablet users.
“People would buy an ARM tablet, and think they have this copy of MS office that’s going to run there, but it won’t,” Baker says.
Nonetheless, the Windows 8 dual user interface scheme does offer Microsoft and hardware companies a chance to rethink the tablet space as it exists today. This could lead to a tablet that’s both interesting and well-integrated with other Windows products.
“You’re actually going to see tablets that are focused on specific kinds of tasks: video acceleration for media, or you have legacy app compatibility for desktop users,” Baker says. “It’s not really about the tablet itself. It’s about what you can enable with the tablet.”
Baker used a theoretical United Airlines app as an example scenario for different use cases:
“If they develop an app for end users, it’s going to be ‘I need to use this to get information quickly, then move on with my life.’ They will develop an app for the Metro UI. It’ll launch, get you flight details, and then you can go on to what you want to do next. If an app’s geared toward someone who’s sitting at a desk, they may not need the Metro UI at all, just the desktop interface. Then they could just have a Metro tile that gives you Metro information as needed.”

With the beta of Windows 8 coming out later this month, Microsoft’s tablet plans should soon be revealed. But for now, only one thing’s for sure: Regardless of whether the desktop UI will appear on ARM tablets, Windows 8 tablets will offer a dramatically different alternative to Apple and Android tablets.
Software :: Microsoft Office for iPad: Why You’ll Need It, How You’ll Use It
Microsoft Office has been a desktop computer staple for decades, and
now it looks like it might finally migrate to modern touchscreen
tablets. But does Microsoft’s mouse- and keyboard-dependent
productivity software even belong on a tablet? And if it does make the transition to touch, how will we actually use it?
Yesterday, a report from The Daily claimed that Microsoft Office for iPad apps are definitely in the works, and could be released “in the coming weeks.” The story included photos and descriptions of a purported hands-on demo. Microsoft representatives were quick to shoot back, both on Twitter and in an official statement, saying The Daily had its facts wrong and that its reporters had not, in fact, seen an actual Microsoft product on the tablet.
Nonetheless, The Daily‘s Peter Ha later insisted that a working version of the app was demoed to the digital publication by a Microsoft employee. It’s a he-said-she-said situation, but at least one key industry watcher feels Office for iPad makes sense.
“I can say that based on the products Microsoft currently has in the market, launching additional Office apps for Apple devices would be a logical extension of their existing strategy,” Forrester analyst Sarah Rotman Epps told Wired in an e-mail. Microsoft already has Mac and iOS products like Office for Mac, a note-taking app called OneNote, SkyDrive for cloud storage, and Lync, points out Rotman Epps.
Rumors that Microsoft would be bringing Office to the iPad have been circulating for a while, particularly since The Daily reported in late November that the suite would arrive in early 2012 at a $10 price point.
If what The Daily reported Tuesday is true, it’s possible that Microsoft Office for iPad could land concurrent to — or even onstage with — Apple’s first public iPad 3 demo, which is expected to be held the first week of March. It would certainly make for an interesting presentation, as Apple doesn’t actively evangelize its Microsoft synergy. Microsoft will be demoing its Windows 8 consumer preview on Feb. 29, so the timing of an early March Office for iPad unveiling would seem to work: Microsoft’s big platform-wide announcement wouldn’t be upstaged by its smaller Apple announcement.
So let’s assume Office is coming to the iPad. How precisely will you use it?
“You’ll use it for content curation. And it’s very unlikely you’ll be using the iPad in native tablet touch mode,” Sachin Dev Duggal, CEO of Nivio, told Wired. Nivio is a cloud platform that lets you access your desktop and its files — including Windows and Microsoft Office — with a touch-controlled mouse pointer as input. “In most cases, you’ll have it docked into a screen or a keyboard,” Dev Duggal said of the rumored Office app.
However, a second use case — passively browsing through documents — definitely lends itself to the iPad’s simple touch-controlled data input. And don’t underestimate the value of full document support. By loading native Office docs directly into Office, you ensure files render with proper formatting, a talent not always manifest in competitors like Documents to Go Premium. In this case, “The pure gesture-based control works great,” Dev Duggal said. “It translates to a tablet experience.”
OK, so Dev Duggal paints an interesting picture of how the app will be used, but, again, is there a desperate need for Office on the iPad? Many of us have been getting by just fine without it. Well, according to Resolve Market Research, 18 percent of those who decided not to purchase an iPad 2 did so strictly because it didn’t come with Microsoft Office programs. That’s not a number to sneeze at.
Dev Duggal thinks students and small businesses will be interested in Office for iPad. And there’s also another prime user group: people who don’t want to spend money on multiple devices. “If they can cross-utilize devices to also do productivity, that’s a huge cost savings,” Dev Duggal said.
Elaine Coleman of Resolve Market Research concurs with Dev Duggal. “Tablets are a critical dual-purpose device,” Coleman told Wired, adding that close to 70 percent of personal tablet users also use their devices for business.
Indeed, the iPad has a growing role in the world of enterprise computing, with a large percentage of Fortune 500 companies adopting the tablet (this was a touch point in Apple CEO Tim Cook’s recent first-quarter earnings call). So, no doubt, the addition of Microsoft Office to the enterprise mix would be welcome.
But Microsoft has waited a long time to deliver this product — perhaps too long.
“Every day that Microsoft does not have Office apps for iPad, they lose potential sales to competitors,” Rotman Epps said. Such competitors include: Apple’s own iWork office suite; Quickoffice, an iPhone alternative for viewing, sharing and editing Microsoft Office documents; and SlideShark, an iPad-based PowerPoint platform.
Rotman Epps pointed out that these and a host of other productivity apps are all top performers in Apple’s App Store. Indeed, Apple’s Pages, Keynote and Numbers (in other words, the iWork suite) make up three of the top five spots in the Top Charts for paid Productivity apps in the App Store. And with OS X Mountain Lion’s heavy iCloud integration, using Apple’s iWork suite will make even more sense for users who own multiple Apple products.
Whether people who already use Office alternatives would switch to Microsoft-brand products is “hard to say for sure,” says Coleman. “I think in the enterprise many still believe ‘Office is King’ and will come back.”
Regardless, if Microsoft Office for iPad did make its debut onstage for the iPad 3 in a few weeks, it would be the first time the two tech giants teamed up at an Apple event in 15 years. Considering what happened last time, it would be a landmark occasion. For both companies.
Feb 25, 2012
Software Technology :: CloudOn Brings Microsoft Office to iPad
Probably the biggest thing stopping many users from switching to the iPad full time is the lack of Microsoft Office on the tablet. It might be a bloated, slow, convoluted mess that makes you want to toss your computer out the window whenever you use it, but Office – and particularly Word – is pretty much mandatory for many jobs.
Enter CloudOn, a combination of app and web service, which lets you create and edit Office documents using your iPad. It works by running Office-compatible software on the CloudOn servers, meaning you need to be online to use it. But as the server-session uses a native app as a front-end, you open mail attachments, say, with the usual “Open with” service.
CloudOn also pulls your documents in from Dropbox for editing, and sends them back when done. The free app has currently been pulled from the App Store due to overwhelming demand, but from screenshots you can see that the interface isn’t really very touch-friendly, and comes off more like a regular desktop app squeezed onto a small screen.
Still, if you really need to do some tracking of changes on the go, you might want to sign up to be notified when the app goes back in the store. I really think that the developers missed a trick by making this app free, especially as it uses some presumably expensive server-power. I’d pay $10 for it just to have it when I need it, and I hate MS Word with a passion.
Gadget Technology :: Could Microsoft Office Go Multi-Platform For Mobile?
Traditionally, Microsoft has been a software company, leveraging its
office suites and operating systems, but selling applications for any
compatible hardware and platform. For smartphones in particular, its
strategy has been to supply the software and let other companies worry
about developing the phones. So why not go all the way and sell its
software for every device on every platform?
That’s what Business Insider’s Dan Frommer proposes the company do: “Microsoft should develop Office apps for the iPad, Android, Chrome OS, BlackBerry tablet, and any other computing platform that is likely to become popular over the next 5-10 years,” adding that “if Microsoft wants to keep people tied into its Office suite, it needs to go where the people are going.”
Office is integrated into the forthcoming Windows Phone 7 OS, but would compete on several fronts in smartphone and tablet platforms, including iWork on Apple’s iPad, Google Docs on the mobile web, and Dataviz’s multi-platform Documents To Go, just acquired by BlackBerry maker RIM.
Frommer sees RIM’s purchase of Documents To Go as a defense against the possibility of Microsoft introducing an Office app for Blackberry. Ironically, if RIM stops active development of Documents To Go for other platforms, that could create just the multi-platform opening needed to entice Microsoft to swoop in.
Technology :: 'Fountain of youth' enzyme lengthens mouse life
FINALLY, a contender for the elusive fountain of youth: an enzyme found in humans appears to lengthen the life of mice.
Researchers hoping to slow the march of age were dealt a blow in 2010, when signs that an enzyme called sirtuin 2 extended the life of worms were shown to be false due to flawed experimental design.
Mammals
have seven types of sirtuin, so Haim Cohen and Yariv Kanfi at Bar-Ilan
University in Ramat Gan, Israel, turned to sirtuin 6 instead. They
compared mice genetically engineered to have increased levels of SIRT6
with normal mice, engineering the mice in two different ways to control
for genetic influences.
Male
mice from both strains lived 15 per cent longer than normal mice or
females. Older modified male mice metabolised sugar faster than normal
mice and females, suggesting that SIRT6 might extend life by protecting
against metabolic disorders such as diabetes.
Feb 24, 2012
Technology :: Now You Can Edit Collaboratively with Google Docs for Android
Just a few short weeks after giving users of Google Docs for Android offline access to their documents, Google on Wednesday announced another highly sought-after addition to the software.
Specifically, users of the word processing app can now collaborate with others on their documents, with updates appearing in real time as participants type on their computers, tablets, and phones. Users need only tap the document to join the collaboration.
“We want to give everyone the chance to be productive no matter where they are, so today we’re releasing a new update to the Google Docs app for Android,” wrote Google software engineer Vadim Gerasimov in a Wednesday blog post announcing the news. “We've brought the collaborative experience from Google Docs on the desktop to your Android device.”
Easier Editing on the Go
Along with the new collaboration capabilities in Google Docs for Android, Google has also updated the software's interface to make it easier to work with documents on the go.
Users can now pinch to zoom and focus on a specific paragraph, for example, or see the whole document at a glance.
“We also added rich text formatting so you can do things like create a quick bullet list, add color to your documents, or just bold something important,” Gerasimov explained.
The video below demonstrates the new Google Docs app in action.
Presentation Discussions
Also on Wednesday, Google followed up on an update to the Google Docs presentations Web application that it previewed last fall.
Not only is that enhanced preview now enabled for all new presentations, but Google has also brought the discussions feature already familiar to documents users over to the presentations side as well.
With discussions in presentations, users can now comment on a shape or an entire slide, for example, or send an email notification by adding someone to a comment, software engineer Michael Thomas explained in a blog post announcing the news.
Converting Existing Presentations
Users of the new feature can also resolve comments to let collaborators know that they’ve been addressed, and they can give others the ability to comment on a presentation without being able to edit it. The video below demonstrates the new discussions feature in action.
To convert existing presentations to the new version of the editor, users should create a new presentation and import their slides by selecting “Import slides” from the “File” menu, Thomas said. Further details are provided on the Google Docs support site.
Google is also hosting a Google+ Hangout at 2:30 pm EST on Thursday to discuss the new presentation updates.
Technology :: Nehalem and Swift Chips Spell the End of Stand-Alone Graphics Boards
When AMD purchased graphics card maker ATI, most industry observers
assumed that the combined company would start working on a CPU-GPU
fusion. That work is further along than you may think.
What is it?
While GPUs get tons of attention, discrete graphics boards are a comparative rarity among PC owners, as 75 percent of laptop users stick with good old integrated graphics, according to Mercury Research. Among the reasons: the extra cost of a discrete graphics card, the hassle of installing one, and its drain on the battery. Putting graphics functions right on the CPU eliminates all three issues.
Chip makers expect the performance of such on-die GPUs to fall somewhere between that of today's integrated graphics and stand-alone graphics boards--but eventually, experts believe, their performance could catch up and make discrete graphics obsolete. One potential idea is to devote, say, 4 cores in a 16-core CPU to graphics processing, which could make for blistering gaming experiences.
When is it coming?
Intel's soon-to-come Nehalem chip includes graphics processing within the chip package, but off of the actual CPU die. AMD's Swift (aka the Shrike platform), the first product in its Fusion line, reportedly takes the same design approach, and is also currently on tap for 2009.
Putting the GPU directly on the same die as the CPU presents challenges--heat being a major one--but that doesn't mean those issues won't be worked out. Intel's two Nehalem follow-ups, Auburndale and Havendale, both slated for late 2009, may be the first chips to put a GPU and a CPU on one die, but the company isn't saying yet.
USB 3.0 Speeds Up Performance on External Devices
The USB connector has been one of the greatest success stories in the history of computing, with more than 2 billion USB-connected devices sold to date. But in an age of terabyte hard drives, the once-cool throughput of 480 megabits per second that a USB 2.0 device can realistically provide just doesn't cut it any longer.
What is it?
USB 3.0 (aka "SuperSpeed USB") promises to increase performance by a factor of 10, pushing the theoretical maximum throughput of the connector all the way up to 4.8 gigabits per second, or processing roughly the equivalent of an entire CD-R disc every second. USB 3.0 devices will use a slightly different connector, but USB 3.0 ports are expected to be backward-compatible with current USB plugs, and vice versa. USB 3.0 should also greatly enhance the power efficiency of USB devices, while increasing the juice (nearly one full amp, up from 0.1 amps) available to them. That means faster charging times for your iPod--and probably even more bizarre USB-connected gear like the toy rocket launchers and beverage coolers that have been festooning people's desks.
When is it coming?
The USB 3.0 spec is nearly finished, with consumer gear now predicted to come in 2010. Meanwhile, a host of competing high-speed plugs--DisplayPort, eSATA, and HDMI--will soon become commonplace on PCs, driven largely by the onset of high-def video. Even FireWire is looking at an imminent upgrade of up to 3.2 gbps performance. The port proliferation may make for a baffling landscape on the back of a new PC, but you will at least have plenty of high-performance options for hooking up peripherals.
Wireless Power Transmission
Wireless power transmission has been a dream since the days when Nikola Tesla imagined a world studded with enormous Tesla coils. But aside from advances in recharging electric toothbrushes, wireless power has so far failed to make significant inroads into consumer-level gear.
What is it?
This summer, Intel researchers demonstrated a method--based on MIT research--for throwing electricity a distance of a few feet, without wires and without any dangers to bystanders (well, none that they know about yet). Intel calls the technology a "wireless resonant energy link," and it works by sending a specific, 10-MHz signal through a coil of wire; a similar, nearby coil of wire resonates in tune with the frequency, causing electrons to flow through that coil too. Though the design is primitive, it can light up a 60-watt bulb with 70 percent efficiency.
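To put the demo's efficiency figure in perspective, a rough calculation follows; it assumes "efficiency" means power delivered to the bulb divided by power drawn at the transmitting coil, which the researchers did not spell out.

# Implied input power for the Intel demo: a 60 W bulb at 70 percent
# end-to-end efficiency (assumed definition of "efficiency").
bulb_watts = 60
efficiency = 0.70

input_watts = bulb_watts / efficiency
wasted_watts = input_watts - bulb_watts

print(f"Power drawn at the transmitter: ~{input_watts:.0f} W")
print(f"Lost in the link: ~{wasted_watts:.0f} W")

In other words, the demo burns roughly 26 watts to light a 60-watt bulb wirelessly, which is why efficiency, not just range, is the number to watch as the technology shrinks.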
When is it coming?
Numerous obstacles remain, the first of which is that the Intel project uses alternating current. To charge gadgets, we'd have to see a direct-current version, and the size of the apparatus would have to be considerably smaller. Numerous regulatory hurdles would likely have to be cleared in commercializing such a system, and it would have to be thoroughly vetted for safety concerns.
Assuming those all go reasonably well, such receiving circuitry could be integrated into the back of your laptop screen in roughly the next six to eight years. It would then be a simple matter for your local airport or even Starbucks to embed the companion power transmitters right into the walls so you can get a quick charge without ever opening up your laptop bag.
Search Technology :: Why Google Digital Glasses Are a Prescription for Disaster
We’ve seen accident after accident caused by people texting, gaming, or web surfing while walking. The U.S. government is now considering banning all automobile phone calls, including hands-free calls.
Now, Google reportedly will have digital glasses on sale by the end of the year, according to the New York Times. Doesn't it seem like a bad time to develop digital displays in front of our eyes?
That’s right: Android-based lenses overlaying your world with information like maps and data. We all knew this augmented reality product was eventually coming, but it is now looking literally like a disaster (or more) waiting to happen.
How They Work
We’ll apparently pay between $250 and $600 for glasses with one computerized lens, PCWorld’s Daniel Ionescu noted earlier Wednesday. The lens will be a contextual heads-up display that can tell you, for instance, how far you are from your destination. They aren't designed for continuous wear, however.
Like Android phones, these goggles will be licensed to third-party companies and will use a 3G or 4G connection to download data. And how will you control the menus? By nodding and bobbing your head.
Watch out for that tree!
Early reports say that the glasses aren’t designed for everyday wear, but that’s akin to saying that smartphones aren’t meant to be carried all day or home computers were only for certain tasks--before we carried our smartphones all day and used home computers for most tasks. I doubt that Apple planned on people texting while walking, either.
Glasses are actually the final piece of Google’s mission: to know what a user is doing every single moment of the day. The search giant already is unifying some 60-odd products into one log-in for continuous online tracking. And, as we reported last week, it’s enticing you to use Google to come up with those web passwords.
Yeah, the digital glasses will be pretty strange and, at worst, pretty dangerous. Here are the very likely problems with Google’s ambitious product.
Google’s power, however, comes from knowing everything you do online. By wearing Android-powered glasses, you’re giving Google unprecedented access to:
- Your location at all times
- Your most common interactions
- Your closest companions through facial recognition
- Your eating, shopping, and traveling habits
Grand Theft Sunglasses
As some commenters have pointed out, glasses theft could definitely rise once these expensive specs come out. If so, that might parallel the high number of iPod and iPhone thefts that occurred when those technologies first arrived.
Worse, imagine being the boy or girl at school with computerized lenses. Aren’t four-eyed school kids getting harassed enough without wearing Lt. Commander Geordi La Forge eyewear?
Coupons 24 hours a day--Now in Your Eyes!
It’s all about location, location, location, and Google's goggles will have a direct bead on you 24 hours a day. On one end are Groupon, Living Social, and other mass-coupon services, and on the other are FourSquare, Gowalla, and other check-in companies. Google jumped into the middle of the fight last year with its Google Coupons app and, more recently, with Google Latitude check-ins.
Neither Google Coupons nor Google Latitude has made much of a dent in the competition, but having tracking data on users 24/7 would be a huge coup for both services. It would mean automatic check-ins, pushed suggestions, and coupons. Lots and lots of coupons.
It’s easy to picture a major food chain, like McDonald’s-owned Chipotle, paying for a top spot on your eyeglasses, kind of like an ad on Google search.
The message could offer you a coupon every time a Chipotle restaurant is within a mile radius (which, at least in my neighborhood, is often). The Times estimates that the glasses will be priced like high-end smartphones, so you can bet that cheaper, subsidized goggles will come along for those willing to see tons of ads throughout each and every day.
Feb 23, 2012
Technology :: Will the FTC Investigate Google's Safari Gaffe?
Privacy advocates and now some members of Congress say Google should answer for its practice of bypassing the default privacy settings of potentially millions of users of Apple's Safari browser.
Three members of the U.S. House of Representatives are asking the Federal Trade Commission to investigate Google's Safari workaround. The Electronic Privacy Information Center is going further, asking [PDF] the FTC to find that Google violated its recent settlement with the federal agency regarding its Buzz privacy practices. Google, meanwhile, says it was merely using "known functionality" in Safari and any resulting privacy violations were just a mishap the company "didn't anticipate."
Goofari
The Wall Street Journal recently reported that Google was bypassing the default privacy settings in Apple's Safari for both desktop and mobile devices. Google's privacy violations potentially include users of iPhone, iPod Touch, iPad, and Mac OS X devices, as well as Safari for Windows users. Safari's defaults prohibit third parties such as advertising and web analytics firms from setting tracking cookies without user authorization. This presented a problem for Google, since the company wanted to identify when users were signed in to their Google accounts in order to deliver personalized advertising and the ability to +1 (similar to a Facebook like) items online.
To get around this issue, Google inserted an invisible web form into its advertising if a user clicked on the company's +1 buttons embedded in Google advertising. Safari would then think the user interacted with the invisible form and allow the browser to accept further cookies.
This workaround also enabled Google to track users across the web even though their privacy settings said they didn't want to be tracked. Google responded to the accusations by saying it was only providing features that signed-in Google users had enabled using "known functionality" in Safari's web browser. But, the company said, it didn't anticipate that Safari's "known functionality" would have the side effect of allowing other tracking cookies to be set as well, such as cookies from its advertising service, DoubleClick.
So should the FTC chalk this up to a big misunderstanding and a mistake, or investigate Google's potential misbehavior? Regardless of Google's motives, I think the FTC should investigate, and here's why.
Broke the Rules
"We used known Safari functionality to provide features that signed-in Google users had enabled," says Rachel Whetstone, Google's senior vice president of communications and public policy in response to the Journal's report. "Unlike other major browsers, Apple’s Safari browser blocks third-party cookies by default. However, Safari enables many web features for its users that rely on third parties and third-party cookies...Last year, we began using this functionality to enable features for signed-in Google users on Safari."Whetstone argues that Google was only enabling "known functionality" in Safari to carry out the wishes of signed-in Google users. But was this the best plan? Instead of using this workaround couldn't Google have used a browser pop-up or a web page redirect to alert users they needed to change their cookie settings to enable this kind of activity? Instead, the company chose to use an invisible method beyond the control of the user.
Popularity
Thanks to the popularity of Apple's Safari browser on iOS, the result of Google's workaround is that the privacy of perhaps millions of users was violated. Apple's Safari currently accounts for 55 percent of all smartphone and tablet browsing activity worldwide, according to metrics firm Netmarketshare.
Same Old Song and Dance
Every time Google is found to be up to no good, the company uses virtually the same excuse: "Oops, sorry, that was a mistake, we didn't know we were doing that." This time around it was Whetstone saying that Google "didn't anticipate" its Safari workaround would allow it to set tracking cookies the user hadn't explicitly authorized.
When privacy concerns were raised over Google's failed social networking platform, Buzz, in February 2010, the company responded, "We quickly realized that we didn't get everything quite right. We're very sorry for the concern we've caused." Google then promised to do better.
A few months later, in May, Google was caught collecting user data from unencrypted Wi-Fi networks as it used its Street View cars to create a worldwide database of Wi-Fi routers to help improve the company's mobile location services. "We have been mistakenly collecting samples of payload data from open (i.e. non-password-protected) WiFi networks, even though we never used that data in any Google products," Google said.
More recently, in January, Google was accused of trying to weasel money out of small business owners in Kenya by falsely claiming that it was in a joint venture with Mocality, a Kenya-based crowdsourced business directory. And what was Google's response this time? "We were mortified to learn that a team of people working on a Google project improperly used Mocality’s data and misrepresented our relationship with Mocality," said Nelson Mattos, Google's vice president for product and engineering in Europe and emerging markets. "We’re still investigating exactly how this happened, and as soon as we have all the facts, we’ll be taking the appropriate action with the people involved." Oops, we didn't know -- again.
Four serious gaffes and each time Google said it didn't realize what it was doing. That may in fact be true in each case, but does oversight excuse the error? How many times can Google say, "Oops, we goofed, we didn't know" before the company is held to account for its self-inflicted stupidity? Accident or not, Google should be investigated for its bad behavior and held accountable for its actions.
Technology :: The Future of Mobile Phones
Use Any Phone on Any Wireless Network
The reason most cell phones are so cheap is that wireless carriers subsidize them so you'll sign a long-term contract. Open access could change the economics of the mobile phone (and mobile data) business dramatically as the walls preventing certain devices from working on certain networks come down. We could also see a rapid proliferation of cell phone models, with smaller companies becoming better able to make headway into formerly closed phone markets.
What is it?
Two years is an eternity in the cellular world. The original iPhone was announced, introduced, and discontinued in less than that time, yet carriers routinely ask you to sign up for two-year contracts if you want access to their discounted phones. (It could be worse--in other countries, three years is normal.) Verizon launched the first volley late last year when it promised that "any device, any application" would soon be allowed on its famously closed network. Meanwhile, AT&T and T-Mobile like to note that their GSM networks have long been "open."
When is it coming?
Open access is partially here: You can use almost any unlocked GSM handset on AT&T or T-Mobile today, and Verizon Wireless began certifying third-party devices for its network in July (though to date the company has approved only two products). But the future isn't quite so rosy, as Verizon is dragging its feet a bit on the legal requirement that it keep its newly acquired 700-MHz network open to other devices, a mandate that the FCC agreed to after substantial lobbying by Google. Some experts have argued that the FCC provisions aren't wholly enforceable. However, we won't really know how "open" is defined until the new network begins rolling out, a debut slated for 2010.
Your Fingers Do Even More Walking
Last year Microsoft introduced Surface, a table with a built-in monitor and touch screen; many industry watchers have seen it as a bellwether for touch-sensitive computing embedded into every device imaginable. Surface is a neat trick, but the reality of touch devices may be driven by something entirely different and more accessible: the Apple iPhone.
What is it?
With the iPhone, "multitouch" technology (which lets you use more than one finger to perform specific actions) reinvented what we knew about the humble touchpad. Tracing a single finger on most touchpads looks positively simian next to some of the tricks you can do with two or more digits. Since the iPhone's launch, multitouch has found its way into numerous mainstream devices, including the Asus Eee PC 900 and a Dell Latitude tablet PC. Now all eyes are turned back to Apple, to see how it will further adapt multitouch (which it has already brought to its laptops' touchpads). Patents that Apple has filed for a multitouch tablet PC have many people expecting the company to dive into this neglected market, finally bringing tablets into the mainstream and possibly sparking explosive growth in the category.
When is it coming?
It's not a question of when multitouch will arrive, but how quickly the trend will grow. Fewer than 200,000 touch-screen devices were shipped in 2006. iSuppli analysts have estimated that a whopping 833 million will be sold in 2013. The real guessing game is figuring out when the old "single-touch" pads become obsolete, possibly taking physical keyboards along with them in many devices.
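For scale, here is what those two endpoint figures imply if you compound them over the roughly seven intervening years; the assumption of steady year-over-year growth is mine, not iSuppli's.

# Implied average annual growth from ~200,000 units (2006) to 833 million (2013).
start_units = 200_000
end_units = 833_000_000
years = 2013 - 2006

growth_factor = end_units / start_units        # ~4,165x overall
cagr = growth_factor ** (1 / years) - 1        # compound annual growth rate

print(f"Overall multiple: {growth_factor:,.0f}x")
print(f"Implied CAGR: {cagr:.0%} per year")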
Cell Phones Are the New Paper
What is it?
The idea of the paperless office has been with us since Bill Gates was in short pants, but no matter how sophisticated your OS or your use of digital files in lieu of printouts might be, they're of no help once you leave your desk. People need printouts of maps, receipts, and instructions when a computer just isn't convenient. PDAs failed to fill that need, so coming to the rescue are their replacements: cell phones.
Applications to eliminate the need for a printout in nearly any situation are flooding the market. Cellfire offers mobile coupons you can pull up on your phone and show to a clerk; Tickets.com now makes digital concert passes available via cell phone through its Tickets@Phone service. The final frontier, though, remains the airline boarding pass, which has resisted this next paperless step since the advent of Web-based check-in.
When is it coming?
Some cell-phone apps that replace paper are here now (just look at the ones for the iPhone), and even paperless boarding passes are creeping forward. Continental has been experimenting with a cell-phone check-in system that lets you show an encrypted, 2D bar code on your phone to a TSA agent in lieu of a paper boarding pass. The agent scans the bar code with an ordinary scanner, and you're on your way. Introduced at the Houston Intercontinental Airport, the pilot project became permanent earlier this year, and Continental rolled it out in three other airports in 2008. The company promises more airports to come. (Qantas will be doing something similar early next year.)
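As a rough illustration of the general idea, and emphatically not Continental's or the TSA's actual format (the field layout, signing scheme, and key below are invented for the sketch), a boarding pass can be reduced to a short signed string that any 2-D bar-code library could then render on the phone's screen.

# Toy signed boarding-pass payload. Everything here is made up for
# illustration; a real airline system uses its own format and keys.
import base64, hashlib, hmac

AIRLINE_SECRET = b"demo-key-not-real"   # hypothetical signing key

def make_pass_token(name: str, flight: str, seat: str, date: str) -> str:
    payload = f"{name}|{flight}|{seat}|{date}".encode()
    sig = hmac.new(AIRLINE_SECRET, payload, hashlib.sha256).digest()
    # This string is what a QR/Aztec library would render as the 2-D bar code.
    return base64.urlsafe_b64encode(payload + b"." + sig).decode()

print(make_pass_token("DOE/JANE", "CO1402", "17C", "2008-11-03"))

The point of the signature is that the scanner at the gate can verify the pass came from the airline and wasn't edited, even though it was displayed from a phone rather than printed at a counter.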
Where You At? Ask Your Phone, Not Your Friend
What is it?
Location-based services (LBS) were originally envisioned as simply using old-school cell-phone signal triangulation to locate users' whereabouts, but as GPS chips become more common and more sophisticated, GPS is proving to be not only handy and accurate but also the basis for new services. Many startups have formed around location-based services. Want a date? Never mind who's compatible; who's nearby? MeetMoi can find them. Need to get a dozen people all in one place? Both Whrrl and uLocate's Buddy Beacon tell you where your friends are in real time.
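Under the hood, the "who's nearby?" question these services answer comes down to comparing GPS fixes. A minimal sketch follows; the coordinates, the 1 km threshold, and the helper function are illustrative and not any particular service's API.

# Great-circle distance between two GPS fixes, the core of a proximity check.
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))   # 6371 km = mean Earth radius

me = (37.7749, -122.4194)       # my phone's GPS fix (example values)
friend = (37.7793, -122.4193)   # a friend's last reported position

if distance_km(*me, *friend) < 1.0:
    print("Friend is within walking distance")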
Of course, not everyone is thrilled about LBS: Worries about surreptitious tracking or stalking are commonplace, as is the possibility of a flood of spam messages being delivered to your phone.
When is it coming?
LBS is growing fast. The only thing holding it back is the slow uptake of GPS-enabled phones (and carriers' steep fees to activate the function). But with iPhones selling like Ben & Jerry's in July, that's not much of a hurdle to overcome. Expect to see massive adoption of these technologies in 2009 and 2010.
Technology News :: What's Hot In... Technology
The year ahead “will be transformative in relation to how new technologies will impact our lives,” says Luca Penati, managing director of the global technology practice at Ogilvy Public Relations. “We will see the rise of the role of consumers in the enterprise, since we all want to use our smartphone or tablet at work as we use it at home, with apps that make our job easier and more fun.”
Jim Hawker, a principal at UK consultancy Threepipe, best known for its work in the consumer space, agrees. “Never before has technology been such a driver of innovation with consumer marketing,” he says. “The brands that are standing out and generating great results are those that are technology aware and incorporating new techniques to provide deeper engagement with consumers.
“The rise of smartphones and other mobile device adoption means we are all carrying around serious bits of kit that offer marketers a wonderful platform on which to engage with us both online and in the real world. That is such a powerful opportunity that is there for the taking and consumer and retail brands are moving fast to take advantage.”
Ubiquity and Convergence
According to Scott Friedman, regional director of Text 100 North America, “2012 is the year that technology moves from being an industry or a sector to being a core part of every business regardless of discipline. Twenty years ago technology was considered niche; 10 years ago it was considered one of the fastest growing industries. Today it’s everything.”
Global spending on consumer technology devices in 2012 is expected to top $1 trillion for the first time. “That’s not so surprising in a world where there are now more connected devices than people,” says Esty Pujadas, director of Ketchum’s global technology practice, who points to a variety of devices that will continue to proliferate in 2012: “The iPad 3, LTE smartphones, Internet-connected TVs, cars with augmented-reality windshields, fridges that know what’s inside.”
Pujadas predicts three trends: co-creation (“collaboration between tech companies and experts in seemingly unrelated fields, or customers, or major players in other industries”); appification (“how we now experience much of life through the mobile apps we download and how that shapes our expectations and behaviors”); and integration (“consumers want either all-in-one devices or at least a way to combine their personal data, and marketers want the insights that come from analyzing all that resulting Big Data”).
“These trends were front and center at the recent Consumer Electronics Show,” she says, “especially if you consider how many non-tech companies like Mercedes-Benz, Ford and Craftsman invested in having a big presence there. Technology will not only be a growth sector in 2012, it is also morphing and overlapping with other industry sectors at record speed.”
Heidi Sinclair, global technology practice chair at Weber Shandwick, is another who sees the breakdown of traditional barriers between technology and other practice areas.
Sinclair says that the “consumerization” of IT “is forcing even non-consumer companies to market to consumers thus driving the need for consumer technology communications expertise.” At the same time, “we are squarely in The Age of Innovation where every company has an innovation story to tell. Technology communications know-how is being applied to everything from telling the science story in shampoo to developer communications programs geared to driving app development for automobiles.”
As a result Pujadas sees an advantage for full-service firms: “This is a real opportunity for agencies with multiple competencies: companies will need help going far beyond product marketing and communications– they’ll need help with corporate reputation, public affairs, issues and crisis—especially around data privacy and security—influencer strategies, audience targeting, social media and growth in emerging markets.”
Conversely, Text 100’s Friedman believes the ubiquity of technology, and its convergence with other sectors and disciplines, presents an opportunity for tech specialists to expand.
“There isn't a single industry that isn't using technology to innovate and build competitive advantage,” he says. “And given that technology has been at the core of all that we do at Text 100, we're now seeing our expertise translate into the media, digital, automotive, travel and health sectors. Our 'technology' expertise is being sought in sectors we never thought possible.”
Content
The proliferation of high-tech devices has implications for technology marketers and for everyone engaged in communications—a topic we will cover at greater length in our look at what’s hot in digital and social media.
But if there’s one area of activity that’s particularly important for technology clients it’s the creation of original content that can be delivered across all of these platforms.
“We’re also seeing a surge in requests from our clients in all sectors—but in particular in technology—for content that can be used across platforms,” says Friedman. “This move to ‘branded journalism’ and multi-platform content shows the shift in the way technology companies are integrating their social, traditional, marketing and communications platforms to achieve most relevance with their audiences.”
International Markets
And while there are those who question the potential for growth in the US—Anne Green of New York-based CooperKatz & Company says that “over recent years our industry has benefited from growth in tech start-ups, but it remains to be seen whether that rate of growth is sustainable, particularly in key verticals that have seen a massive influx of new companies and competition”—there’s no doubt that there’s plenty of growth left in international markets.
The technology sector presents a significant opportunity in the Indian market, according to Varghese Cherian, who leads Edelman’s technology practice there. He estimates that 50 percent of the top PR spenders in India are technology companies.
That’s partly because technology companies are “early adopters of being in the forefront of accepting and trying out newer trends makes them the largest spenders” and also because “technology companies have been exposed to global trends more than other companies and hence they are part of an evolved cycle in using modern age communications tools and a strategic approach.”
The other BRIC markets—and China in particular—are also likely to see increased activity.
Coming Clean
Finally, when it comes to hot sectors in the technology arena, cleantech continues to generate the greatest interest. Sinclair says green technology “has moved from being in start-up mode to a fast-growth business with the associated communications demands.”
Stuart Wragg of Australian firm n2n communications is another who sees growth in the cleantech arena.
“As the need to reduce carbon emissions becomes increasingly important, expect to see an acceleration of innovation in clean-tech as businesses seek out solutions that help reduce carbon, increase efficiency and support Australia’s transition toward a low carbon economy,” he says. “And don’t expect big business to dominate share-of-voice. Watch out for start-ups too, keen to grab their share of the action with some disruptive communications.”
Technology :: The Future of Your PC's Hardware
What is it?
As its name implies, the memristor can "remember" how much current has passed through it. And by alternating the amount of current that passes through it, a memristor can also become a one-element circuit component with unique properties. Most notably, it can save its electronic state even when the current is turned off, making it a great candidate to replace today's flash memory.
Memristors will theoretically be cheaper and far faster than flash memory, and allow far greater memory densities. They could also replace RAM chips as we know them, so that, after you turn off your computer, it will remember exactly what it was doing when you turn it back on, and return to work instantly. This lowering of cost and consolidating of components may lead to affordable, solid-state computers that fit in your pocket and run many times faster than today's PCs.
Someday the memristor could spawn a whole new type of computer, thanks to its ability to remember a range of electrical states rather than the simplistic "on" and "off" states that today's digital processors recognize. By working with a dynamic range of data states in an analog mode, memristor-based computers could be capable of far more complex tasks than just shuttling ones and zeroes around.
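For readers who want to see the "remembers its state" property in numbers, here is a toy Python simulation in the spirit of the linear ion-drift memristor model; the parameter values and the drive current are illustrative assumptions, not figures from HP or any real device.

# Toy linear ion-drift memristor model (all parameters are illustrative).
# It shows the defining property: resistance depends on how much current
# has flowed, and the state persists when the current is switched off.
R_ON, R_OFF = 100.0, 16000.0   # ohms: fully doped / undoped resistance
D = 10e-9                      # device thickness in meters
MU_V = 1e-14                   # dopant mobility, m^2 / (V*s)

def simulate(currents, dt=1e-6, x=0.1):
    """Step the normalized state x = w/D for a sequence of drive currents."""
    resistances = []
    for i in currents:
        resistances.append(R_ON * x + R_OFF * (1.0 - x))  # instantaneous memristance
        x += (MU_V * R_ON / D ** 2) * i * dt               # linear drift of doped region
        x = min(max(x, 0.0), 1.0)                          # state stays within [0, 1]
    return x, resistances

# Drive with a current pulse, then remove the current: the state (and hence
# the resistance) is retained, which is what makes the device memory-like.
x_after_write, _ = simulate([1e-3] * 500)
x_after_rest, _ = simulate([0.0] * 500, x=x_after_write)
print(round(x_after_write, 3), round(x_after_rest, 3))  # identical values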
When is it coming?
Researchers say that no real barrier prevents implementing the memristor in circuitry immediately. But it's up to the business side to push products through to commercial reality. Memristors made to replace flash memory (at a lower cost and lower power consumption) will likely appear first; HP's goal is to offer them by 2012. Beyond that, memristors will likely replace both DRAM and hard disks in the 2014-to-2016 time frame. As for memristor-based analog computers, that step may take 20-plus years.
32-Core CPUs From Intel and AMD
What is it?
With the gigahertz race largely abandoned, both AMD and Intel are trying to pack more cores onto a die in order to continue to improve processing power and aid with multitasking operations. Miniaturizing chips further will be key to fitting these cores and other components into a limited space. Intel will roll out 32-nanometer processors (down from today's 45nm chips) in 2009.
When is it coming?
Intel has been very good about sticking to its road map. A six-core CPU based on the Itanium design should be out imminently, after which Intel will shift focus to a brand-new architecture called Nehalem, to be marketed as Core i7. Core i7 will feature up to eight cores, with eight-core systems available in 2009 or 2010. (And an eight-core AMD project called Montreal is reportedly on tap for 2009.)
After that, the timeline gets fuzzy. Intel reportedly canceled a 32-core project called Keifer, slated for 2010, possibly because of its complexity (the company won't confirm this, though). That many cores requires a new way of dealing with memory; apparently you can't have 32 brains pulling out of one central pool of RAM. But we still expect cores to proliferate when the kinks are ironed out: 16 cores by 2011 or 2012 is plausible (when transistors are predicted to drop again in size to 22nm), with 32 cores by 2013 or 2014 easily within reach. Intel says "hundreds" of cores may come even farther down the line.
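A back-of-the-envelope way to see why piling on cores has diminishing returns (independent of the memory question) is Amdahl's law: the serial fraction of a workload caps the achievable speedup. The quick calculation below is a general illustration, not a claim about any specific Intel or AMD design.

def amdahl_speedup(parallel_fraction, cores):
    """Theoretical speedup when only part of a workload can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 90% parallel tops out well below the core count.
for cores in (2, 8, 16, 32):
    print(cores, "cores ->", round(amdahl_speedup(0.90, cores), 2), "x speedup")
# Prints roughly: 2 -> 1.82x, 8 -> 4.71x, 16 -> 6.4x, 32 -> 7.8x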
Feb 22, 2012
Technology :: Neutrality Rules Slated to Take Effect this Fall
Last week, the FCC published its final Open Internet rules in the Federal Register, which means they will formally go into effect later this fall. The publication caps off a two-year process at the Commission to get the rules in place. While the rules won’t change much in terms of day-to-day use of the Internet, it is good news for consumers and innovators that they will at long last be enforceable.
The rules essentially preserve the status quo online. They prevent cable, DSL, and fiber carriers from favoring or disfavoring certain sites or applications over others and prevent mobile carriers from blocking websites or competing voice and video applications – leaving consumers to decide which services they might prefer. The only significant change will be that now, if carriers engage in discriminatory routing or network management practices, those whose traffic is affected will have a place to go to demand recourse.
The rules themselves reflect a light-touch and flexible approach to preserving the competitive environment that currently exists on the Internet. The rules do not, as some critics declare, amount to “regulating the Internet,” and there is ample evidence that in the absence of rules carriers might discriminate (as a few have done already) against some lawful traffic.
The rules are set to go into effect on November 20, but their formal publication also starts another, more ominous clock. After October 13, Internet neutrality opponents in the Senate will be able to force a vote on a joint resolution under the Congressional Review Act that would repeal the rules and strip the FCC of the authority to make similar rules in the future. (The resolution passed the House along party lines in the spring.) Just as significant, the publication of the rules also starts the clock on litigation, as Verizon and any other parties wishing to challenge the rules in court are now free to file suit.
Repealing the rules would be a huge mistake that would mark a dramatic change in U.S. communications policy. As we’ve written before (here and here), to strip the nation’s communications regulator of any authority over what is rapidly becoming the core communications network of the 21st century would be absurd. Not having any authority looking out for Internet users’ best interests would leave carriers free to discriminate amongst Internet applications, picking winners and losers, to the detriment of consumer choice, competition, and online innovation.
Feb 21, 2012
Science :: Boy genius's book reveals life in college at age 8
The one thing 14-year-old Moshe Kai Cavalin dislikes is being called a genius.
All he did, after all, was enroll in college at age 8 and earn his first of two Associate of Arts degrees from East Los Angeles Community College in 2009 at age 11, graduating with a perfect 4.0 grade point average.
Now, at 14, he's poised to graduate from UCLA this year. He's also just published an English edition of his first book, "We Can Do." It took four years to finish, in part because Cavalin, whose mother is Chinese, decided to publish it in Mandarin, and doing the translation himself was laborious.
The 100-page guide explains how other young people can accomplish what Cavalin did through such simple acts as keeping themselves focused and approaching everything with total commitment. He's hoping it will show people there's no genius involved, just hard work.
"That's
always the question that bothers me," Cavalin, who turned 14 on
Tuesday, says when the G-word is raised. "People need to know you don't
really need to be a genius. You just have to work hard and you can
accomplish anything."
And maybe cut out some of the TV.
Although he's a big fan of Jackie Chan movies, Cavalin says he limits his television time to four hours a week.
Not that he lacks for recreational activities or feels that his parents pressured him into studying constantly. He writes in "We Can Do" of learning to scuba dive, and he loves soccer and martial arts. He used to participate in the latter sport when he was younger, winning trophies for his age group, until his UCLA studies and his writing made things a little too hectic.
Indeed one of the key messages of his book is to stay focused and to not take on any endeavor half-heartedly.
"I was able to reach the stars, but others can reach the 'Milky Way," he tells readers.
It was a professor at his first institution of higher learning, East Los Angeles Community College, who inspired him, Cavalin says. He didn't like the subject but managed to get an A in it anyway, by applying himself and seeing how enthusiastic his teacher, Richard Avila, was about the subject.
Avila, he says, inspired him to write a book explaining his methods for success so he could motivate others.
Han Shian Culture Publishing of Taiwan put the book in print, and it did well in Taiwan, Singapore and Malaysia, as well as in several bookstores in Southern California's Asian communities. He then brought it out in English for the U.S. market.
Because of his heavy study load, Cavalin has had little opportunity to promote the book, other than a signing at UCLA, where he also lives in student housing with his parents and attends the school on a scholarship.
After earning his bachelor's degree, the math major plans to enroll in graduate school and eventually earn an advanced degree.
After that, he's not so sure. He points out that he's still just barely a teenager.
"Who
knows?" he says, chuckling at the thought of what lies ahead in
adulthood. "That's a very distant future, and I'm pretty much planning
for just the next few years. That's too far into the future for me to
see."
Feb 20, 2012
Management :: Best Practices and Common Pitfalls Associated with Suppliers Involvement in NPD
Involving suppliers in new product development provides organizations
with a range of benefits, including shorter development time, better
quality products, and lower cost of development. In this new in-depth
article Dr Sanda Berar delves deeper into some of the best practices
and the most common pitfalls associated with suppliers’ involvement in
NPD.
Strategic technology suppliers provide organizations with access to key external technologies while also supporting open innovation. In turn, open innovation is seen as critical for increasing a company’s competitive advantage.
This article sets out to explore some of the best practices and some of the most common pitfalls associated with suppliers’ involvement in new product development (NPD). To achieve this, seven case studies of NPD in one organization are discussed in the article. The discussion of the cases focuses on the role that project level factors play in shaping suppliers’ involvement in NPD.
Several factors are considered here:
- correct evaluation of supplier’s technology versus the product requirements;
- correct evaluation of supplier’s competence versus product requirements;
- trust between buyer and supplier;
- prior knowledge of supplier;
- complexity of suppliers’ delivery chain and of the R&D set-up;
- buyer-supplier power balance;
- supplier’s absorptive innovation capabilities.
About the author
Dr. Sanda Berar has over 15 years of experience in the high-tech industry and holds a PhD in Economics and an MSc in Computer Engineering. She is presently with Nokia in Helsinki, heading the software department in a product unit. Previously, Sanda worked in several high-tech companies in Romania. Between 1994 and 2000, she was a lecturer at Babes-Bolyai University, Romania. She also holds an honorary research fellow title with the University of Aberdeen Business School, where she is involved in studies related to NPD.
Science :: DNA in Beethoven’s hair sequenced to make one last song
There's a piece of Ludwig van Beethoven that still exists almost 200 years after his death — a lock of his hair. The hair even survived the Holocaust thanks to an enterprising prisoner who knew the safest place to hide it was inside his behind! But that's not the most fantastic part of the story. After the lock of hair was sold at an auction in 2009, a team of artists and musicians got their hands on a piece of it, submitted it for a DNA analysis, and actually composed a piece called Ludwig's Last Song using the results.
The details of Beethoven's genes were given to Scots composer Stuart Mitchell who was involved in Cymatics, which is essentially the study of visible sound and vibration. Mitchell decided to assign a note to each of the 22 unique amino acids he found in the DNA sequence. Every note was placed on a musical staff that corresponded to the resonance frequency of its amino acid. Using these notes, he then arranged a piece for the viola and a piece for the piano, which when combined made up the finished product that's available for purchase.
But Beethoven fans take heed: the song doesn't sound anything like the maestro's compositions, as you can hear in the video above. As Mitchell said, "Everyone expected to hear it in the style of Beethoven but the melody is almost tragic. To me it sounds like somebody fighting, struggling, a really sympathetic melody with a great deal of soul."
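To show the shape of the idea (one note per amino acid), here is a tiny Python sketch; the note assignments and the input sequence are invented for illustration and are not Mitchell's actual frequency-based mapping.

# Illustrative only: a made-up amino-acid-to-note table and a fictional sequence.
NOTE_FOR_AMINO_ACID = {
    "A": "C4", "R": "D4", "N": "E4", "D": "F4", "C": "G4",
    "E": "A4", "Q": "B4", "G": "C5", "H": "D5", "I": "E5",
    # ...the remaining amino acids would each get their own note
}

def sequence_to_melody(sequence):
    """Translate a one-letter amino-acid sequence into a list of note names."""
    return [NOTE_FOR_AMINO_ACID[aa] for aa in sequence if aa in NOTE_FOR_AMINO_ACID]

print(sequence_to_melody("ACDEG"))   # ['C4', 'G4', 'F4', 'A4', 'C5']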
Feb 19, 2012
Why Filtering Is Not the Solution
Bill Keller, former executive editor of The New York Times, recently responded to detractors of a column he penned on PIPA/SOPA.
In a blog post, Keller writes, "Much of the mail bristles with resentment of the corporate behemoths that have tried to protect music, film and books by building higher legal walls around their property."
CDT is among the immense and philosophically diverse crowd that bristled during the PIPA/SOPA debate. By now it should be well understood that our objections weren’t focused on rightsholders' desire to protect their work, but rather on the methods by which they wanted to protect it.
Keller continues, "[I]t should be well within the capability of the Internet giants to filter their traffic for the most egregious pirates, just as good citizenship (and in some cases the law) would oblige a bus company to notify police if the bus line was being used to facilitate a crime. At least it’s worth exploring."
In fact, relying on "Internet giants" to filter user traffic is far more problematic than it first appears. This is precisely what was "explored" in the PIPA/SOPA debate – and precisely what made the bills so controversial.
(Incidentally, the analogy to a bus company is off base, in part because it assumes that it is obvious to the bus company when riders are engaged in criminal activity. A better analogy would be requiring the bus company to search riders for contraband or demand that riders disclose the purpose of each trip. In addition, Internet filtering carries technical and international consequences that have no parallel in local bus routes.)
Proponents of PIPA/SOPA rallied behind the proposal that Internet service providers (ISPs) be required by law to engage in domain name system (DNS) filtering; that is, ISPs would interfere with the Internet’s addressing system to prevent domain names from connecting to their corresponding numerical IP addresses. As CDT has explained, including in testimony and letters to Congress, DNS filtering would threaten significant collateral damage without any serious prospect of achieving meaningful reduction in infringement.
DNS filtering is ineffective because there are a variety of easy techniques to circumvent it, including using a simple browser plug-in or bookmarking a site’s IP address. Meanwhile, Sandia National Labs and some of the Internet’s most respected engineers have warned that it would undermine cybersecurity. The White House, after lengthy analysis of the matter, concluded that "[p]roposed laws must not tamper with the technical architecture of the Internet through manipulation of the . . . DNS" because such provisions "pose a real risk to cybersecurity and yet leave contraband goods and services accessible online."
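The circumvention point is easy to see concretely: DNS filtering only breaks the name-to-address lookup, so a client that already knows (or has bookmarked) a site's IP address can skip the lookup entirely. Here is a minimal sketch using only the Python standard library; the hostname is just a placeholder.

import socket

host = "example.com"   # placeholder hostname

# Normal path: ask DNS for the site's address, then connect to it.
ip = socket.gethostbyname(host)
print("resolved", host, "to", ip)

# Circumvention path: if a filter only interferes with the DNS lookup,
# a client holding the IP address can connect to it directly,
# bypassing the name-based block entirely.
with socket.create_connection((ip, 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
    print(conn.recv(64))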
Moreover, if the U.S. government were to embrace this technique to block websites that it deems—to use Keller’s term—"egregious," it would be that much more difficult to advocate that oppressive regimes not block sites they find egregious. As Julian Sanchez of Cato has noted, if we were to embrace DNS filtering, the only thing separating oppressive regimes and the U.S. in this case would be what’s on our respective blacklists. And if each country enforces its own blacklist, the end result would be a highly balkanized Internet. The U.S. State Department is the leading global advocate for a unified, global Internet; embracing a technique that fragments the Internet would seriously erode U.S. credibility in this cause.
Are there other, non-DNS techniques ISPs could use to filter user traffic? Well, they could engage in deep packet inspection to monitor user behavior and ferret out illegal activity. But this kind of pervasive surveillance comes at a high cost to privacy and, like DNS filtering, sets a dangerous international precedent.
Even for Internet entities other than ISPs, filtering carries significant policy implications. Companies required to monitor and filter illegal activity may well overblock in order to play it safe. To ensure they won't be accused of shirking their responsibilities, the tendency will be to block disputed material or anything that carries even a whiff of legal controversy. The risk of lawful speech getting caught up in the filters is especially high when, as with PIPA/SOPA, the filters aim to block entire domain names. Domain names are often shared among multiple users of separate subdomains, and also among multiple uses (such as a company’s public-facing website and internal email server). Domain name filtering is a very blunt instrument that can sweep in lawful content inadvertently.
Finally, we should all be realistic about the long-term consequences at stake here. If we establish both the technical infrastructure and the legal and social norms to support pervasive Internet filtering, its use will be demanded for a wide range of causes. There is, after all, no shortage of undesirable content and behavior online. With "Internet giants" now tasked with online policing functions, the online environment would effectively become subject to a new set of centralized gatekeepers.
That may be an appealing vision to some, but it would jeopardize the many benefits of the Internet’s open and decentralized nature. CDT would urge Keller, and other thought leaders delving into this area of policy as a result of the PIPA/SOPA uproar, not to cast aside the core principles that make the Internet such a powerful force for free expression and innovation. A good place to start in understanding those principles, we would humbly suggest, is the CDT document "What Every Policymaker Needs to Know About the Internet."
None of this is to say that nothing can be done about combating offshore piracy. CDT has suggested that a "follow the money" approach offers a higher likelihood of effectiveness with less collateral damage. But hopefully one of the biggest lessons that has emerged from the months-long national debate over PIPA/SOPA—which included a sea of articles, blog posts, and reports about the risks the bills posed—is that filtering the Internet is not the answer.
Update
CDT Fellow David Post, who helped organize the law professors' letters in opposition to PROTECT IP and SOPA, has posted his take on the bills' flaws and bade them good riddance over at Justia's Verdict blog.
Management :: Four Approaches to Fostering Companies’ Innovation Capability
For many companies, being and continuing to be innovative is essential. But how can companies become ‘more’ innovative? Carmen Kobe & Ina Goller asked experts, consultants, and managers how they experienced successful improvement of innovativeness in their business life. From the interviews, four interventions used to develop companies’ innovation capabilities could be extracted.
The interventions presented in this in-depth article are aimed at developing companies’ innovation capabilities, and focus on the ability to develop new products, services and processes and bring them successfully to market:
- selection and training of human resources;
- implementation of new structures and practices;
- creation and implementation of innovation ideas;
- establishing new values and norms.
The outcomes of interventions aimed at innovation will depend on the company’s strategies, abilities and resources. Sustainable innovation activity requires a combination of efforts and abilities. Change requires ongoing, sustained efforts to change based on thorough analysis of existing systems, establishment of new goals, changes and time for effects to emerge. The pace of change in people and everyday behaviors will be moderate, but even small improvements in innovation capability can produce greatly improved results.
By Carmen Kobe & Ina Goller
Feb 18, 2012
Paving the Road to the 'Learning Health Care System'
Last week, I participated on a panel as part of the 2012 Stanford Law Review Symposium, The Privacy Paradox: Privacy and its Conflicting Values.
The symposium was co-hosted by the Stanford Center for Internet and
Society. Kudos to the Stanford Law Review for including a panel on
health privacy along with panels on uses of drones for surveillance;
big data, politics and privacy; and privacy and conflicts with First
Amendment and tort law.
Participants on the panel were invited to submit essays for publication in the symposium edition of the Stanford Law Review Online. CDT’s essay explores the way the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule discourages health care providers from sharing the results of analyses conducted on electronic medical record data for quality improvement purposes. CDT recommends that such uses of electronic health information be governed by consistent policies based on fair information practices, even in circumstances where the aggregate results are intended to be shared to benefit the health care system. The essay, Paving the Regulatory Road to the “Learning Health Care System,” builds on work led by CDT in its role as chair of the federal Health IT Policy Committee’s Privacy and Security Tiger Team.
How to Successfully Implement Collaborative Idea Management
Are you effectively using the creative potential of your employees, customers and partners to address your innovation challenges? Collaborative idea management is a method to learn from the “wisdom of the crowd” in order to drive innovation. This in-depth article gives you an introduction, including best practices drawn from how Ericsson, the global telecom company, introduced and designed a successful collaborative idea management system.
Many organizations are facing an urgent need to exploit new ideas and opportunities to meet increasing competitive pressure and changing customer demands. The recent economic recession has further accelerated the urgency of innovation across industries and globally. But from where do you get those much needed breakthrough ideas to drive growth, productivity and value creation? When innovation is more important than ever, collaborative idea management can help organizations to surface new ideas, improve them and make sure they reach the right people. It is also a way to empower and recognize innovative employees, to measure and stimulate creative activity and to promote a more open and collaborative innovation culture in the organization.
What is idea management?
Idea management is a structured process for the collection, handling, selection and distribution of ideas. It may include support for gathering, storing, improving, evaluating and prioritizing ideas by providing methods and tools, such as templates and guidelines. Idea management is an integrated part of the innovation process. It is relevant for all types of ideas, from incremental improvements to new and disruptive business opportunities. The scope can range from a single internal unit, to the entire organization, to also including external stakeholders, such as customers and partners. Some companies that have implemented idea management systems are IBM, Accenture and Whirlpool. Ericsson started collaborative idea management in 2008 by developing a generic solution aligned with the existing collaboration platform and strategy.
Addressing challenges
The handling of ideas in organizations involves several challenges. First, the more people you engage, the more difficult it gets to evaluate and give feedback on all the ideas. You need an alternative to channeling all ideas through one central point that quickly gets choked. Second, larger organizations typically have numerous and diverse innovation needs throughout the organization. Defining the innovation needs is a critical success factor to focus idea management efforts on the relevant themes and challenges. But how do you channel the right ideas to the right places when the landscape of innovation needs is not easily defined top down? Third, for an idea management system to be sustainable you need to look beyond an IT solution. You need an infrastructure with guidelines and processes that is integrated with the overall innovation and collaboration efforts and aligned with organizational culture. That is easier said than done! A fourth challenge is to engage employees to come up with new ideas and to contribute them. In a recent blog post, Hutch Carpenter argues that employees are intrinsically motivated to come up with new ideas. Every day, employees think of ideas relevant to the organization. It is just happening as a part of their daily work. The challenge for organizations is to harness these motivations and provide an outlet for them. Finally, organizations must ensure that their collaborative idea management initiatives actually deliver on the “wisdom of the crowd” promise. If the perception is that you just get more mediocre ideas, the effort will not be long-lived.
Designing a system
The basic components of collaborative idea management are support for users to submit new ideas and to comment on and develop already existing ones, as well as support for managers to capture, track and further develop promising ideas. Finally, you need support to administer, measure and follow up. The following design rules, based on the initial experience at Ericsson and insights from other organizations, might help when considering getting serious about idea management.
- Invite everyone to engage the entire organization. Several studies show that employees are the number one source of innovative ideas.
- Use the principle of self-organization to handle complexity. Let innovation needs be defined bottom-up and use the IT tool to match idea supply and demand.
- Embrace collaboration to leverage expertise and a diversity of perspectives. Openness will enable users from different parts of the organization to improve and comment on ideas.
- Secure feedback and recognition for a sustainable initiative. Make sure idea owners can see everything that is happening to their ideas. Reward good ideas.
- Integrate idea management into your overall collaboration effort. Benefits are a connected workflow, unified user interface and simplified support.
Release the creativity
It makes sense to try to utilize the collective creativity of all employees and even include external stakeholders to generate those much needed breakthrough ideas. Under the right conditions, a collaborative idea management system of tools and practices can help you to do just that. Tapping into the innovation energy of employees, customers and partners might improve your ability to respond to what emerges, find differentiating opportunities, drive a culture of collaboration and innovation, and create a sense that every employee contribution is important for the future of the organization.
By Karin Wall, chief editor
Further reading
The article Collaborative Idea Management – using the creativity of crowds to drive innovation is written by Magnus Karlsson, Director New Business Development & Innovation at Ericsson Headquarters in Stockholm, Sweden. The article discusses an approach to collaborative idea management based on the initial experience by Ericsson and insights from other organizations. The article will enable you to:
- get a basic understanding of both the problems and solutions connected to collaborative idea management
- achieve more constructive and higher quality management team discussions by providing a common ground and a common language for preparing for collaborative idea management
- better reflect on the structure of your company and take action to create an environment that supports collaborative idea management
- prepare for the challenges, and avoid repeating the mistakes of others
- identify the vital steps that need to be considered when designing and implementing a collaborative idea management system.