
Oct 25, 2012

Price increases help Whirlpool counter weak demand


Whirlpool Corp (WHR.N) reported a higher-than-expected quarterly profit on Tuesday as it benefited from price increases and improved productivity, prompting the world's largest appliance maker to raise its earnings outlook for the year.

The news, which boosted Whirlpool shares to their highest level since April 2011, came the day after Swedish rival Electrolux (ELUXb.ST) said it expected demand to stay weak in Europe and that it planned to push ahead with cost and production cuts in that market.
Appliance makers have struggled with higher raw materials costs and tepid demand in Europe, forcing them to raise prices and rely more on still-growing markets like Latin America.

Whirlpool, the maker of Maytag and KitchenAid appliances, had raised prices in July. Both Electrolux and Whirlpool had also done so last year to pass soaring raw material costs on to customers. At the time, analysts worried that the moves could hurt their market share, especially as South Korean rivals such as LG Electronics Inc (066570.KS) and Samsung Electronics Co Ltd (005930.KS) kept prices unchanged.

Despite the skepticism, Whirlpool has managed to push through substantial price increases, especially in its largest market, North America, said NBG Productions chief equities analyst Brian Sozzi.
"They had ... the guts to raise prices in 2011 in a market overall that was not very receptive, and they stuck," Sozzi said.

On a conference call, Whirlpool executives admitted that the company had lost some market share in North America and Europe.

"Our share loss is actually very small," said Marc Bitzer, president of Whirlpool's North America unit. In North America, he said, most of the market share loss was in the lower-end appliance segment, and Whirlpool actually gained share at the higher end.
Sales declined in the third quarter, as they did for many other manufacturers exposed to Europe and changes in foreign exchange rates. Whirlpool's sales fell to $4.50 billion from $4.63 billion, while analysts were looking for $4.58 billion.

The company continues to expect 2012 industry unit shipments to be flat to down 2 percent in the United States and to show an increase of 7 percent to 10 percent in Latin America. It forecast declines of 5 percent to 7 percent in Asia and 2 percent to 3 percent in Europe, the Middle East and Africa.


TIGHT LID ON COSTS

To cope with uneven demand around the globe, Whirlpool has focused on cutting costs. Last fall, the company took some drastic actions, from reducing manufacturing capacity to axing about a tenth of its workforce in North America and Europe.

Whirlpool has also closed some manufacturing facilities in North America and moved some production to lower-cost countries such as Mexico. In recent years, it also started using common parts across its lineup of dishwashers, refrigerators and washing machines.
Whirlpool's third-quarter net earnings fell to $74 million, or 94 cents a share, from $177 million, or $2.27 a share, a year earlier.

Excluding restructuring expenses, Brazilian tax credits and other special items, the company said it had earned $1.80 a share. Analysts on average were looking for a profit of $1.60 on that basis, according to Thomson Reuters I/B/E/S.

For the full year, Whirlpool forecast earnings of $6.90 to $7.10 a share, excluding items, up from its prior outlook of $6.50 to $7.00.

Whirlpool shares were up 4.8 percent at $90.45 Tuesday on the New York Stock Exchange. They touched a high of $90.83 earlier in the session.


UPS :: UPS profit drops, shares rise on outlook


United Parcel Service Inc (UPS.N) reported lower quarterly profit on Tuesday, citing slowing global trade, and said there was "some uncertainty" about the strength of the coming holiday season.

The share price rose 2.7 percent after UPS slightly revised its 2012 forecast, signaling to Wall Street it would top the consensus estimate for the fourth quarter, which includes the important holiday shipping season.

Third-quarter earnings per share at the world's largest package delivery group matched estimates, but quarterly revenue fell from a year ago and missed the Wall Street view.
UPS and rival FedEx Corp (FDX.N) are viewed as economic bellwethers because of the volume of goods they handle. The value of packages that UPS moves on its trucks and planes is equivalent to about 6 percent of U.S. gross domestic product and 2 percent of global GDP.

"We're seeing a slower growth environment and customers continuing to shift to slower modes of transport," said Edward Jones analyst Logan Purk. "Freight still moves, but on a cheaper mode of transport that affects revenues and therefore profitability."

Purk put a "hold" rating on the stock, citing the pending 5.2 billion euro ($6.7 billion) takeover of Dutch peer TNT Express (TNTE.AS) - the biggest purchase in UPS's 105-year history - as well as the slowing economy and cautious customers.

The year-end "fiscal cliff" drama in Washington over steep spending cuts and expiring tax breaks worried UPS executives.

"The lack of clear direction on future tax and spending policy has (slowed) and will continue to slow business investment," Chief Executive Scott Davis told analysts on a conference call. "The lack of political will to fix our debt problem adds to the uncertainty in our economy. Just what we don't need."

UPS sees adjusted full-year earnings at between $4.55 and $4.65 per share, which would be 5 percent to 7 percent above the 2011 figure. Its prior forecast ranged from $4.50 to $4.70.

Analysts estimated $4.56 a share for the year, according to Thomson Reuters I/B/E/S.
"That guidance implies $1.34 to $1.44 per share in the fourth quarter, compared with the $1.34 Wall Street consensus, and that's what's driving the optimism," Purk said.
UPS said it expects to handle more than 500 million packages between Thanksgiving and Christmas, and said it would release more estimates and holiday hiring plans within a few weeks.

FedEx said on Monday it expects to handle more than 280 million shipments in the period and intends to add 20,000 seasonal workers.

"While there is some uncertainty around the magnitude of the holiday shopping season, we are confident in UPS's ability to deliver," Chief Financial Officer Kurt Kuehn said.
The stock rose 2.7 percent to $73.50 in late-morning trade.

Oct 24, 2012

VMware profit beats estimates, names new CFO


Software maker VMware Inc's (VMW.N) third-quarter profit beat estimates on stronger-than-expected sales to the U.S. government, but it forecast current-quarter revenue in a range largely below expectations.

VMware also named Jonathan Chadwick as its new chief financial officer on Tuesday. Chadwick has previously served as Microsoft Corp's (MSFT.O) corporate vice president and as the CFO of Skype.

"The CFO has been a vacant position in the company for quite some time and its good to see a CFO hired with good solid industry experience," ISI Group analyst Brian Marshall said.
VMware, a publicly traded division of storage giant EMC Corp (EMC.N), forecast fourth quarter revenue of $1.26 billion to $1.29 billion, compared with analysts' expectations of $1.28 billion, according to Thomson Reuters I/B/E/S.

Net profit fell to $157 million, or 36 cents per share, from $178 million, or 41 cents per share, a year earlier. Revenue rose 20 percent to $1.13 billion from $942 million.
Excluding items, the company earned 70 cents per share.

Analysts, on average, had forecast $1.13 billion in revenue and earnings of 63 cents per share.
U.S. revenues grew 25 percent to $554 million from the year-ago quarter, the company said.
"We had a solid third quarter with a slightly better than anticipated business with the U.S. federal government," Chief Operating Officer Carl Eschenbach said on a conference call with analysts. "U.S. federal government bookings grew slightly year-over-year and finished higher than expected."

"Offset by challenges in some countries, particularly within the EMEA (Europe, Middle East & Africa) region and Australia, we remain cautious about the potential for slower IT spending as we exit 2012 and enter 2013," he added.

VMware is the biggest maker of the so-called virtualization software that reduces the number of servers companies need. It competes with Oracle Corp (ORCL.O) in that market and with Salesforce.com Inc (CRM.N) in offering "cloud" computing services.
Shares of the Palo Alto, California-based company were up 2 percent at $85.41 in extended trade. The stock had closed at $83.72 on Tuesday on the New York Stock Exchange.


Java SE : October 2012 Critical Patch Update and Critical Patch Update for Java SE Released

As a reminder, the release of security patches for Java SE continues to be on a different schedule than for other Oracle products due to commitments made to customers prior to the Oracle acquisition of Sun Microsystems.  We do however expect to ultimately bring Java SE in line with the regular Critical Patch Update schedule, thus increasing the frequency of scheduled security releases for Java SE to 4 times a year (as opposed to the current 3 yearly releases).  The schedules for the “normal” Critical Patch Update and the Critical Patch Update for Java SE are posted online on the Critical Patch Updates and Security Alerts page. 

The October 2012 Critical Patch Update provides a total of 109 new security fixes across a number of product families including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Supply Chain Products Suite, Oracle PeopleSoft Enterprise, Oracle Customer Relationship Management (CRM), Oracle Industry Applications, Oracle FLEXCUBE, Oracle Sun products suite, Oracle Linux and Virtualization, and Oracle MySQL. 


Out of these 109 new vulnerabilities, 5 affect Oracle Database Server.  The most severe of these Database vulnerabilities has received a CVSS Base Score of 10.0 on Windows platforms and 7.5 on Linux and Unix platforms.  This vulnerability (CVE-2012-3137) is related to the “Cryptographic flaws in Oracle Database authentication protocol” disclosed at the Ekoparty Conference.  Because of timing considerations (proximity to the release date of the October 2012 Critical Patch Update) and the need to extensively test the fixes for this vulnerability to ensure compatibility across the products stack, the fixes for this vulnerability were not released through a Security Alert, but instead mitigation instructions were provided prior to the release of the fixes in this Critical Patch Update in My Oracle Support Note 1492721.1.  Because of the severity of these vulnerabilities, Oracle recommends that this Critical Patch Update be installed as soon as possible. 

Another 26 vulnerabilities fixed in this Critical Patch Update affect Oracle Fusion Middleware.  The most severe of these Fusion Middleware vulnerabilities has received a CVSS Base Score of 10.0; it affects Oracle JRockit and is related to Java vulnerabilities fixed in the Critical Patch Update for Java SE. 
The Oracle Sun products suite gets 18 new security fixes with this Critical Patch Update.  Note also that Oracle MySQL has received 14 new security fixes; the most severe of these MySQL vulnerabilities has received a CVSS Base Score of 9.0. 


Today’s Critical Patch Update for Java SE provides 30 new security fixes.  The most severe CVSS Base Score for these Java SE vulnerabilities is 10.0 and this score affects 10 vulnerabilities.  As usual, Oracle reports the most severe CVSS Base Score, and these CVSS 10.0s assume that the user running a Java Applet or Java Web Start application has administrator privileges (as is typical on Windows XP). However, when the user does not run with administrator privileges (as is typical on Solaris and Linux), the corresponding CVSS impact scores for Confidentiality, Integrity, and Availability are "Partial" instead of "Complete", typically lowering the CVSS Base Score to 7.5 denoting that the compromise does not extend to the underlying Operating System.  
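To make the 10.0-versus-7.5 distinction concrete, here is a minimal sketch of the CVSS v2 base-score arithmetic. The metric weights come from the CVSS v2 specification, and the example assumes a remotely exploitable flaw with low access complexity and no authentication, which is the kind of vector that typically drives these Java applet scores.

# Minimal sketch of the CVSS v2 base-score arithmetic (weights from the
# CVSS v2 specification). It shows why the same flaw scores 10.0 when the
# user runs with administrator privileges (Complete impacts) but 7.5 when
# the compromise is limited to the user's account (Partial impacts).

def cvss2_base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0.0 if impact == 0 else 1.176
    score = (0.6 * impact + 0.4 * exploitability - 1.5) * f_impact
    return round(min(score, 10.0), 1)

# Weights: Access Vector Network = 1.0, Access Complexity Low = 0.71,
# Authentication None = 0.704, impact Complete = 0.660, Partial = 0.275.
print(cvss2_base_score(1.0, 0.71, 0.704, 0.660, 0.660, 0.660))  # 10.0 (administrator)
print(cvss2_base_score(1.0, 0.71, 0.704, 0.275, 0.275, 0.275))  # 7.5 (unprivileged user)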

Also, as is typical in the Critical Patch Update for Java SE, most of the vulnerabilities affect Java and JavaFX client deployments only.  Only 2 of the Java SE vulnerabilities fixed in this Critical Patch Update affect client and server deployments of Java SE, and only one affects server deployments of JSSE.  This reflects the fact that Java running on servers operates in a more secure and controlled environment.  As discussed during a number of sessions at JavaOne, Oracle is considering security enhancements for Java in desktop and browser environments.  

Finally, note that the Critical Patch Update for Java SE is cumulative, in other words it includes all previously released security fixes, including the fix provided through Security Alert CVE-2012-4681, which was released on August 30, 2012. 


Oct 17, 2012

Technology :: Apple's iPad Mini Brackets Microsoft's Surface In The Tablet Media War

So now we know the date of Apple’s next launch, Tuesday October 23rd, and everything is pointing towards the announcement of the iPad Mini. Of course many in the Cupertino-watching industry were expecting an announcement last week. Why go for a date later in the month?

First of all, the Occam's Razor approach is simply that earlier in October wasn't right for Apple and they needed another week or two to be ready. That could simply be a logistics issue, an area of the software or production that needed a few more days of testing to sign off, or perhaps this was the plan all along – to gain extra publicity from the earlier date and feed it into the increasing hype around the iPad Mini.

Be it coincidence, contingency, or careful planning, the 23rd of October is a very useful date for the success of Apple's diminutive tablet. It's going to occupy a lot of column inches, covering the launch, reporting on the first few journalists who get review units, discussions over the pricing and the strategy, and that is going to take a lot of the oxygen out of the tech circles. Apple would gain this coverage at any point in October, so this isn't a case of the PR team making sure there are no competing stories. In fact it's the complete opposite. Apple has rolled a very loud truck up onto Microsoft's media lawn.

Microsoft will be debuting Windows 8 that week, with the initial release scheduled for October 26th. It's fair to say that Microsoft is putting a lot of effort into marketing and getting the word out – and with Cupertino's invite, they're not going to be the focus of the story. Neither will Windows Phone 8 get to be the focus on October the 29th. Even though a week will have passed since the iPad Mini launch, the iPhone 5 showed that coverage between the launch and the general availability of an Apple product remains very high.

While the clash with Windows 8 and Windows Phone 8 is intriguing, the biggest clash is going to be against Microsoft’s own hardware launch with the Surface RT Tablet.

More than the rise of the Android tablets (where success is either fragmented over a number of operating system and UI versions, or restricted to Amazon), Microsoft's Surface tablet poses a real danger to Apple. Surface will sit alongside Windows 8 and Windows Phone 8 as a cohesive system; it will integrate into existing businesses and IT systems; it's fashionable, stylish, and looks the part; and like the iPad Mini, we have no idea what the price will be!

The iPad Mini is going to sell in the millions. It’s unlikely that the Surface RT will match those numbers, but the challenge of the Surface to Apple’s tablet dominance is clear. Yes, it’s going to be a business first approach to sales and will need strong support from Windows 8 and Windows Phone to become established, but the potential is there.

Microsoft knows that this is the moment, potentially the last moment, to be influential in computing over the next decade, and is doing everything to make the October launches count. The industry, Apple included, knows that as well.

In the days after Microsoft's announcements, it's likely the iPad Mini will go on sale, with record numbers, huge queues, and the digital column inches flooded once more with glowing reviews of the latest Apple product. Who's going to give the Surface a fair shake with that? The Surface will inevitably be drawn into a direct comparison with the iPad Mini, when they are products with different aims and target markets. Invariably, when the Surface is measured against the iPad, it won't be a better iPad than the iPad.

Perhaps it's a coincidence. Perhaps that's just how it all worked out for Apple in terms of shipping, stock management, and booking a venue.
But it’s still very useful timing.

Oct 16, 2012

Technology :: What’s New in VMware vCloud Networking and Security 5.1

With the release of vCloud Networking and Security 5.1 product, VMware brings the leading software defined networking and security solution that enhances operational efficiency, provides agility and is extensible to rapidly respond to business needs.
I just want to provide an overview of how the vCloud Networking and Security product brings flexibility to the network and security aspects of the datacenter, and point you to the resources where you can get more information.

There are different components of this solution. The first one addresses the networking challenge by providing a simpler approach to creating an abstracted logical network. In the vSphere infrastructure, you are already familiar with the process of creating virtual switches and associated port groups to build a virtual logical network. This process of creating a virtual network is quick and easy because it is software defined. However, the virtual switch constructs are still dependent on the physical network configuration.

For example, if you create a new port group on a virtual switch to support a new application that needs isolation from other applications, you have to configure a VLAN on the port group and also on the physical switches. So first, you need to work with the networking team before you can create this new port group and deploy the application. This process might take days or weeks. With VDS + VXLAN, we create a new abstracted network, also called an overlay network, that can be created or torn down with a few clicks. Since this network is abstracted from the physical network topology, you don't have to worry about re-configuring your physical network infrastructure. This allows administrators to provision isolated networks on demand for their new applications or tenants.
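As a rough sketch of what provisioning such an overlay network with "a few clicks" looks like when driven through the product's REST interface, the snippet below creates an isolated logical network in one call. The endpoint path, payload fields and credentials here are illustrative assumptions rather than a verified excerpt of the vCloud Networking and Security API, so treat it as the shape of the workflow and check the API reference for the exact resource names.

# Hypothetical sketch: create an isolated VXLAN-backed logical network with
# a single REST call instead of touching VLANs on the physical switches.
# The URL path, XML payload and credentials are assumptions for illustration;
# consult the vCloud Networking and Security API reference for the real names.
import requests

VSM_URL = "https://vsm.example.com"        # vShield Manager address (placeholder)
AUTH = ("admin", "password")               # demo credentials only

payload = """
<virtualWireCreateSpec>
    <name>tenant-42-app-tier</name>
    <description>Isolated overlay network for a new application</description>
    <tenantId>tenant-42</tenantId>
</virtualWireCreateSpec>
"""

response = requests.post(
    VSM_URL + "/api/2.0/vdn/scopes/vdnscope-1/virtualwires",   # assumed path
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,   # lab/demo only; keep certificate verification on in production
)
response.raise_for_status()
print("Created logical network:", response.text)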

The second component addresses the network services aspects. Once you create logical networks, you will want to provide network services such as load balancing, DHCP, firewall and NAT services to the devices or workloads connected to these logical networks. The Edge and App virtual appliances provide flexible, on-demand network services to these logical networks.

The third component addresses the extensibility of the solution through an open architecture with industry-standard APIs. This extensibility enables freedom of choice and avoids vendor lock-in. The solution allows third-party service insertion, so organizations can easily take advantage of new technology, integrating operational workflows with existing systems and procedures. For example, you can deploy a best-of-breed load balancing service from your vendor of choice. There are three different integration points: within a virtual machine, at the edge of the virtual machine, and at the edge of the virtual network.

Finally, the fourth and last component is the management and operation of this complete solution. VMware provides simplified management and operation through the advanced capabilities of VDS, where network administrators have access to familiar troubleshooting and monitoring features such as NetFlow, port mirroring, and SNMP MIBs. On the security front, the App and Edge firewalls are tightly integrated with vCenter Server objects such as clusters, port groups, vApps, etc. This integration makes rule creation faster and less error prone than legacy approaches that require administrators to manually create and maintain IP address–based objects.

Oct 15, 2012

Technology :: Through Widgets

The first advantage of widgets is that they are easy to install and use. You don't have to be a software designer in order to install a text widget on your site or blog. The second advantage is that most widgets can be customized in size, color and style to fit your blog in an elegant way. In this post I'll recommend a few content-related widgets that add relevant content for your readers in addition to your own original content:
Twitter Search Widget – There is no doubt about the impact tweets have on the web: millions of users post hundreds of millions of tweets every day, so why not share the relevant ones with your readers? Instead of adding a Twitter widget of your own tweets, add the search widget and customize it to your blog topic. This example is a search widget on 'iPad'.

The widget can be adjusted with theme, color, size and number of tweets shown. I would recommend using this widget with a search related to the topic of your blog, or a topic concerning current events related to your blog, and maybe even changing the topic of search on the widget from time to time, giving your readers a glimpse at the most relevant tweets.
TimesWidget – The New York Times also has a great widget for your blog. NYTimes, the leading print and online newspaper, lets you enjoy its RSS feed through a great widget (still in beta).

This widget enables you to add the NYTimes feed customized to your topic. Pick the relevant subject (you can pick more than one), such as sports, business, technology or others from the list, and add it to your site, letting your readers have online access to fresh and relevant content right out of the NYTimes.
Google AJAX Search API for YouTube – Not exactly a widget, but I found this cool API that could suit a few blogs. I can't say every blog should have this, but blogs more on the fun side, or those writing about media, can use this YouTube API as a widget on their blog.

It can be placed in a horizontal or vertical layout to suit your site. In this example I placed the AJAX widget with Monty Python's YouTube channel. As mentioned about the Twitter search widget, here too it's recommended to change the channels every now and then.
There are many more widgets; these are just a few examples I found useful, and even eye-catching for readers. Don't add too much to your site, but try to find a few widgets that could give value to your readers.



Oct 14, 2012

'Halo 4' Leaked, Microsoft Investigates [Updated]

Users on these and other forums have claimed they've already played the game, which I have to say seems likely at this point.

Microsoft has commented on the possible leak, telling Joystiq: ”We have seen the reports of ‘Halo 4‘ content being propped on the Web and are working closely with our security teams and law enforcement to address the situation.”

Microsoft also confirmed the two discs, saying the second disc houses the multiplayer components and can be installed to a flash drive or the Xbox hard drive.


It’s early for a leak of a full-fledged, printed disc like this.
With the game due out in close to a month, however, it’s quite possible review copies of a major AAA title like this are already making their way to major gaming sites or at least have found their way to whoever is handling the game’s PR.

More to come if any other facts come to light. As of now, it’s a picture of a video game box posted to the internet. Take it with a grain of salt or two.
And if you’ve accidentally stumbled your way into a copy of the game, feel free to let us know in the comments.

5 Things that Should Never Go Into the Cloud (3)

A Single Copy of Anything

Cloud providers should, and in almost all cases will, perform due diligence when it comes to backing up information. In fact, many of the cloud providers have very sophisticated methods to duplicate information, not only within their datacenters, but across geographically dispersed datacenters. This means that your information could be located in several locations across the globe, so that if a single datacenter (or even several) is taken down, your information will still be available.

Of course, it's possible that the entire infrastructure of a particular cloud provider could be taken down, so that the entire system becomes unavailable. Admittedly, this is highly unlikely, since the infrastructures of the better cloud providers are designed for an exceptional level of availability. But it could happen. A more likely possibility is that the cloud provider goes out of business, is acquired, gives up on its cloud ventures, or is attacked by a disgruntled employee.

The key takeaway here is that, as with your on-premises datacenter, you don't want to have only a single copy of anything. I often see people who should know better leaving a single copy of important information in a cloud service provider's systems. They think that the cloud provider has some kind of magic that will make sure that data is never lost, mostly because of the size and reputation of the provider. But I'm sure you've seen many reports in the media about email and other data that one cloud provider or another has lost and was never able to restore.

If you use a cloud provider, make sure that a copy of everything that’s stored in the cloud is also stored in your own datacenter. I guarantee that you’ll be able to bring your own systems online much faster than the cloud provider, who must service thousands of customers, can bring your information back online after a disaster.
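A simple way to act on that advice is a scheduled job that pulls everything the provider holds back to your own storage. The sketch below assumes an S3-compatible object store and the boto3 library; the bucket name and local path are placeholders.

# Minimal sketch: keep an on-premises copy of everything stored with the
# cloud provider. Assumes an S3-compatible object store and the boto3
# library; bucket name and local path are placeholders.
import os
import boto3

BUCKET = "example-company-data"       # placeholder bucket name
LOCAL_ROOT = "/backups/cloud-copy"    # on-premises backup target

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        destination = os.path.join(LOCAL_ROOT, key)
        os.makedirs(os.path.dirname(destination), exist_ok=True)
        s3.download_file(BUCKET, key, destination)   # copy the object to local disk
        print("copied", key)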

Any Information that Must Always Be Accessible

In the United States, we have long had the concept of “dial tone” access. What we mean by this is that no matter what, you would always be able to get a dial tone so that you could make a call. The power might be out, but you would always be able to get a dial tone on your POTS (Plain Old Telephone System) line so that you can call for help. There are a number of historical, political, and regulatory reasons and background for the “dial tone” concept – but the bottom line is that dial tone meant that the telephone line was always available. The reality of this is changing with the advent of Voice over IP (VoIP), but the expectations remain the same.
Cloud computing, at its base, makes the (currently false) assumption that the Internet provides a similar dial tone experience. In order for cloud computing to work for a business, that business must always be connected to the Internet and the cloud provider must always be connected to the Internet, too. The cloud providers are more likely to always be connected to the Internet because they are generally pretty sophisticated when it comes to high availability for Internet connectivity and they also are going to have multiple and distributed points of presence. The problem with the lack of dial tone for Internet connectivity is for the businesses that need to connect to the cloud provider. Many businesses’ Internet connections are nowhere near “dial tone” quality.
If you have information that must always be available (for example, patient charts that include the patient’s drug allergies), you should never put that information in the cloud. It’s not a matter of whether the Internet connection is going to go down; it’s a question of when and for how long. If someone dies or is seriously injured or loses a large amount of money because the Internet was not available, that irate customer is not going to seek redress from your cloud provider – they are going to go after you. And as with the PII issue, it will do you no good to blame the cloud provider, as it will be assumed that, as part of your due diligence, you knew (or should have known) that the Internet connection was not “dial tone” quality and that a disconnected state would be inevitable at some time. For this reason, never put information that always needs to be accessible into the cloud.

The Importance of Network Redundancy

Now more than ever, today's businesses require reliable network connectivity and access to corporate resources. Connections to and from business units, vendors and SOHOs are all equally important to keep continuity when needed. Business runs all day, every day, even in off hours; most companies run operations around the clock, seven days a week, so to keep a solid business continuity strategy, redundancy technologies should be considered and/or implemented.
So, we need to keep things up and available all the time. This is sometimes referred to as five nines (99.999 percent) uptime. The small allowance for downtime accounts for unforeseen incidents or scheduled maintenance, which is usually set to take place during times of least impact, like the middle of the night or holiday weekends if planned. If this is not a part of your systems and network architecture, it should be considered if you want to keep a high level of availability. Because things break and unforeseen events do take place, we need to evaluate the need for an architecture that is 'highly available', or up as much as possible, with failures foreseen ahead of time and the only downtime being planned maintenance.
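To put those nines in concrete terms, the short calculation below shows how little downtime per year each availability target actually allows.

# How much downtime per year each availability target allows.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label} ({availability:.3%}): about {downtime_minutes:.1f} minutes of downtime per year")

Five nines works out to roughly five and a quarter minutes of downtime a year, which is why failures have to be foreseen and maintenance planned in advance.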
To keep the company's workforces and their customers connected and operating, we need to plan for it. With servers you can cluster, and with web properties and applications you can load balance. Almost every vendor today puts out a line of products to facilitate this need via hardware, software and now via virtualization design.
In this article we will take an in-depth view of current technology and strategies used to create redundancy in your WAN and how to properly design, implement, monitor and test it in case of any disaster that may occur. We will also briefly cover other redundancy options for servers and other network architecture, power and applications.

The Importance of Redundancy

Today’s networks are high-tech and most times high speed. Common to most Wide Area Network (WAN) designs is the need for a backup to take over in case of any type of failure to your main link. A simple scenario would be if you had a single T1 connection from your core site to each remote office or branch office you connect with. What if that link went down? How would you continue your operations if it did? In this section we will explore this scenario and other scenarios to help you design and plan for a backup solution that you can count on and one that is cost effective and will not break the bank.

Network redundancy is a simple concept to understand. If you have a single point of failure and it fails you, then you have nothing to rely on. If you put in a secondary (or tertiary) method of access, then when the main connection goes down, you will have a way to connect to resources and keep the business operational.
The first step in creating network redundancy (particularly in the WAN) is to set up a project plan that will allow you to scrutinize the current architecture/infrastructure, plan for a way to make it redundant, plan for a way to deploy it and then set up a way to test it. Nothing should be thought of as 'complete' until you have tested everything for operational success. Your final step will be putting in the policy and processes that allow you to monitor it and be alerted when things do fail so you can take action. Commonly a company's security policy, disaster recovery plan, business continuity plan and/or incident response plan will leave room for this type of solution.

Testing, however, is the key to your success. This is not a 'set it and forget it' design. When the main link fails, the backup should take over automatically if that is how you designed it. However, some issues may not be self-repairing or resolvable without interaction, so if failover is not automatic, you will need an incident response plan to account for it. You should also have a follow-up procedure whether failover is automatic or manual. This means that when you implement redundancy into your systems or network, you need to take action immediately after a failure, even if your operations continue, to verify that everything did go as planned. If not, then you need an after-action report where you can specify how things will be fixed, or redesigned – then retested.

Analysis is critical to building a good redundancy plan. Almost every network created is unique in some way. This is why you must analyze and take note of not only the common items that would require redundancy, but also other solutions in place that you may not have considered, such as mainframe access.
First, a risk analysis assessment must take place. Next, the core site (or core sites) must be taken into consideration if that is where the bulk of your resources are located, or where the majority of your business connections terminate. Routing and routing protocols need to be considered. Solutions exist (such as when using Cisco Systems devices and software) where specific protocols can be used to handle the failover process for you if implemented correctly. Load balancers, failover solutions and protocols are available to facilitate just about any redundancy option you can imagine.

Note: You should always consider the hardware. For servers and network devices, redundant hot-swappable power supplies and drives (as well as other components) are used to keep everything up and running when a disaster occurs. Also, disk drives can be deployed in a way where the data is spread across multiple drives, such as when using RAID, so that if a drive does fail, the data is not lost. Data should also be saved (backed up) in multiple locations to provide redundant restore options. Having off-site tape storage is one such example, as is having data replicated across multiple hosts using technologies such as Windows DFS. 

Tip: The Local Area Network (LAN) must be examined for single points of failure as well. If your LAN uses only one switch and both routers are connected to it, then when the switch fails, so does the LAN, as well as access to the WAN. 

Once the design phase is complete, a cost analysis session must be completed as well. Competing rivals looking for your business will hope that you budget, plan and design this solution incorrectly. Building redundancy into only 90 percent of the design still leaves you with a possible 10 percent failure scenario. This means that you spent a lot of time, money and resources putting a solution in place that still has a single point of failure, and if this single point creates a significant amount of downtime, then you spent all that time and energy leaving yourself vulnerable to failure regardless. A good management team will ensure that nothing is left open, so that the investment is sound and does what it says it will – keep your operations running.
Considering your current network design and architecture is critical (as mentioned earlier). A simplified view of a company network may be a core site location (perhaps the company headquarters) where all of the servers, systems, applications and main infrastructure reside. In this article we will call this ‘core infrastructure’.

Picture 1 shows the layout of an extremely common design, where a branch office needs to connect to a core site where centralized resources are located, such as financial applications, enterprise resource planning (ERP) software, databases, file server data and so on. 


Picture 1. A common WAN connection scenario with redundancy in place

Here you can see how the remote branch office connects to the core site. There is a dedicated MPLS circuit/link that provides bandwidth at approximately 1.5 Mbps, which is the connection speed of a T1. The MPLS routers are connected to the network via a network switch. Commonly, the router is the network's default gateway, where all packets that are not destined for the local network are sent. The router needs to make a routing decision and, since the main link is up, decides to send traffic via the MPLS connection. When the main link drops, the alternate link is commonly used if set up correctly. This provides your remote site with a new path to reach the resources needed to continue operations.

Back about 10-15 years ago, the technologies that connected most corporate sites were Frame Relay and Integrated Services Digital Network (ISDN). Although ISDN is still used in some fashion, Frame Relay has gone by the wayside, replaced by Layer 3-enabled, TCP/IP-based high speed networks such as MPLS, which we will cover momentarily. The company core and remote sites are normally connected via some form of WAN-based technology, such as OC3s, T3s, T1s, DSL, MPLS and many others. The most common forms today are Multiprotocol Label Switching, or MPLS (private), and some form of Internet-connected link (public, or shared) connected via an encrypted tunnel called a Virtual Private Network (VPN).

VPNs are similar to those you may connect with via your laptop when connecting to work from home. The same technology is used. Since you are connecting over the unprotected public Internet, encryption must be used to secure the transmission of data and communication. Routers and firewalls, if ordered with specific VPN functionality installed and enabled, can connect your remote sites to a core site in the same manner, encrypting the data you send over the public Internet via what is called a 'tunnel'. This is often used because it's cheaper than a managed, dedicated and private MPLS connection, and it normally lies dormant until used. It's also secure. Since an SLA is purchased on the main link, the provider needs to get it back up and running quickly, so you won't need to depend on the alternate link for long.

MPLS by nature is a redundant network. It is commonly managed by an ISP that provides you a connection to it, or a router that they will manage as well. When data is sent from one site to another, it's passed into a 'cloud', which is a private network that the ISP manages. Packets enter the cloud and travel through the ISP's meshed network where redundancy is in place. Although highly available internally, if the router that connects your remote site fails, or the link fails, your site will be down regardless, unless you have a redundant router and link there to pick up in the failed router's place.

ISPs will sometimes offer Internet access via their MPLS network as well. It is not recommended that you use this particular Internet access to connect your backup VPN tunnels, because you could potentially be connecting to the same network that failed you in the first place.

So, all that remains is one last question: what if both links drop? You should make sure that you have a way into your remote sites via Out of Band (OOB) management, such as a modem connection to a serial port of a connecting router. This gives you the chance to test internally at the site (from the local area network [LAN]) in times of complete cut-off. The modem can be connected to a serial port on a router, or to a server on the LAN. As long as a dial-up line is configured and you can dial into it and connect, you can access the site's network. Be sure to secure everything correctly; do not allow someone to war-dial into your network via the modem, which is fairly easy to do if it is not locked down.

Tip:
You should always create an alternate link into your remote sites for when both links are down. This can be done with an extremely cost-effective (cheap) dial-up solution via a POTS line. You can connect a modem to the router, a server or a laptop and have LAN access to troubleshoot or provide access out if needed.

Some companies are more complex; for example, if a company's strategy is to merge with and acquire other companies, there may be core infrastructure located in multiple core networks with mission-critical resources. These companies should consider a strategy to simplify the layout and consolidate and relocate to a centralized (and protected) location, but this may not always be the case. In these situations, achieving a redundant network backup plan can be difficult. Sometimes it's set up like that by design, for example if you had a core/central headquarters site where some of your resources are located, and others placed in a co-located data center. Using resources outside your network that are managed by others (or outsourced) is considered 'cloud computing' and SaaS, or 'software as a service'. Also, you need to consider the differences between keeping services in house and using outsourcing solutions. If your architecture is outsourced partially or completely, you must thoroughly examine the provider's policy, Service Level Agreement (SLA) and policies for conducting work.

You should also always consider the LAN connections of your routers and the LAN's default gateway assignment. For a better design, create a bullet-proof (redundant) LAN by installing two switches and making them the default gateway. This way, if the router fails completely (such as losing power), the default gateway address is still intact, and the switch can make the decision, not the router. See figure 2 for a redundant LAN connection where routers are connected and the switches make the routing decisions in the case where the router with the default gateway assignment is at risk.


Tip: It is recommended that you purchase hardware with multiple hot-swappable redundant power supplies, and make sure that the power source is on a generator, has a backup, and/or is phased correctly so you can survive a power loss entirely. Enterprise UPS systems, backup generators and so on can be used to provide alternate and redundant power solutions when needed. 

Note:
Ordering lines from an ISP or vendor can take some time, so make sure you size them correctly and then order them ahead of time, which can save you time with the deployment of your redundancy solution.
Geography should also be considered. Consider the Point of Presence (POP). If you use one provider and they have internal problems on their network, or a disaster occurs (such as a hurricane) that affects the area, then you will be at the mercy of the provider no matter how many lines you have in place. Disaster recovery solutions account for this, especially if you are using a co-located data center, such as a cold, warm or hot site. 

In sum, a simple rule of thumb when designing and planning for redundancy is: do the job right. Do not cut corners to save money, because if you start to spend capital on redundancy and still leave a single point of failure in place (like a single switch), a small saving could jeopardize the entire solution that in fact cost you a lot to implement. Lines, routers, human resources to implement it and daily monitoring cost money. It would be silly to leave a simple single point of failure in place when you spent time and money on the rest of the project. Consider doing the project correctly by designing it, budgeting for it and getting the right people in place (or trained) to deploy, manage and monitor it correctly, and lastly, test it for accuracy. Consider everything and leave no stone unturned.

After you have designed and planned your solution, you need to consider a few things during deployment. If in-house solutions are used, your employees need to know how to handle an incident. If out-sourced, then your vendor/provider needs to have a plan in place to add, manage, monitor and then recover from a disaster. Test plans should also be considered in both scenarios.
You should also consider a more complex design when considering how to add and deploy your solution.


Here you have multiple remote sites connecting to a core location (or multiple core locations). Redundant links provide an alternate solution to main link failure on your WAN and provide remote site access to core resources.
You should also consider failover technologies. Much like server clustering, network equipment can also fail over to other devices (such as routers, switches, firewalls, etc.) if configured to do so. For example, a Cisco router can be configured with Hot Standby Router Protocol (HSRP). Servers can be clustered and load balanced for any failover scenario you can think of. With Windows Hyper-V, VMware and Citrix virtualization solutions, you can create a design where any failure can be dealt with automatically, keeping operations running in the case of any disaster.

Tip: It's important for VPN clients to be able to access corporate resources as well, whether on the road or in a home-based office. That being said, you should consider reviewing your VPN concentrator redundancy as well. If you need two core-based units for this strategy, make sure that they also know how to fail over from one to the other in time of need.
Once you have considered your design after full analysis and implemented it, you need to test it thoroughly and then document the procedures. After that, you need to continue to test and update the documentation especially as new technologies are added to your architecture, or as your architecture grows such as adding new sites.
Now that we have talked about the importance of redundancy, especially in the network… you need to test it. 

Test Plan – put together a plan that covers the details of a failover scenario so you can test for it. To do this, detail all of the architecture to be tested and fail over manually to see how the solution works. Make sure you test applications, routing paths, time/speed/bandwidth usage and accuracy. Test for fail-back as well: how the main link, when it becomes available, can resume the role of the primary link.

Network Monitoring – your network monitoring solution should help you become aware of a main link failure. Using technologies such as ICMP and SNMP, you can continuously monitor your uptime and be alerted when there is a change in any device, link or solution.
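As a minimal illustration of that kind of monitoring, the sketch below simply pings the far end of each WAN link and raises an alert when one stops responding; the addresses and the alert action are placeholders, and in practice you would feed this into whatever SNMP/ICMP monitoring platform you already run.

# Minimal link-monitoring sketch: ping the far end of each WAN link and
# alert when one stops answering. Addresses and the alert action are
# placeholders; a production setup would use your SNMP/ICMP monitoring
# platform rather than a stand-alone script.
import subprocess
import time

LINKS = {
    "primary-mpls": "10.1.1.1",      # placeholder: core-site MPLS router
    "backup-vpn":   "203.0.113.1",   # placeholder: Internet/VPN router
}

def link_is_up(address):
    """Send one ICMP echo request (Linux ping flags) and report whether it was answered."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", address],
                            capture_output=True)
    return result.returncode == 0

while True:
    for name, address in LINKS.items():
        if not link_is_up(address):
            # Placeholder alert: swap in an email, SNMP trap or pager call.
            print("ALERT: {} ({}) is not responding".format(name, address))
    time.sleep(60)   # check every minute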

Disaster Recovery Plan – a plan should be in place and, if not, added immediately. Business continuity planning and incident response planning are part of this DR plan, which should outline how the business will continue to operate, as well as who will react to the problem and follow it through until normal operations are restored.











5 Things that Should Never Go Into the Cloud (2)

Identity Management Systems

Your identity management systems enable you to confirm that when a person claims to be someone, he/she is actually that person. If you're using Active Directory, then the Active Directory database is part of your identity management system. You might also be using smart cards, biometrics, or one-time passwords as part of a multi-factor authentication system. And you are most likely hosting your identity management systems in-house.

Your identity management system, although not as sexy or cool as some technologies, is the lifeblood of your organization's security. If the integrity of your identity management system is compromised, everything in your organization is "up for grabs", and I do mean everything. The entity that compromises your identity management system will be able to claim the identity of anyone in your organization and carry on a wide range of activities under the guise of the person whose identity has been compromised. If that person happens to have administrative privileges, you're in deep trouble. From the point in time when the identity management system is compromised to the time when incident response is completed, all user activities during that interim must be considered suspect, and any information that was touched, as well as any activities carried out on the corporate systems, must be considered invalid until an audit is completed.

Are there identity management systems in the cloud now? Sure. Facebook, Windows Live, Google, and Yahoo are just a few, and there are many other smaller players. The big question is: Do you trust these entities and the security of their identity management systems? How many times have you heard about some compromise of each of these providers’ identity management systems that ended up with user names and passwords of accounts being compromised? Given the critical nature of identity management to all of your business processes, you should be very wary of trusting identity management to the cloud.

Core Intellectual Property


When you consider storing critical data in the cloud, there are a number of questions you need to ask:
  • How does the cloud provider secure your data?
  • Do they use NTFS?
  • Do they use EFS?
  • Do they use some other method of encrypting information while it’s on the disk?
  • What about information existing in memory on the servers? Is there a way to compromise the data while in memory?
  • If a machine crashes, does it dump memory contents to disk which can be retrieved by an attacker?
  • How do they protect the information when it’s in transit between your clients and their servers? Are they using SSL? TLS? IPsec? Some other encryption protocol? Can an attacker located between you and where your core intellectual property is stored intercept that information “on the wire” and replay the sessions and gain knowledge of the contents of the communication?
  • Is the data itself secured? What if an authorized user gains access to core intellectual property and then decides that he wants to derail the company by sending that data to a competitor? Does the cloud provider enable rights management for all information stored in the cloud?
Unlike your intranet, where you are using IPsec, TLS, NTFS, EFS, BitLocker, and Rights Management Services, you may not know whether all of these security features are available when information is hosted by a cloud provider. There are too many vectors of attack for any data stored in the cloud, which makes it a less than ideal place to store any core intellectual property. After all, compromise of core intellectual property can put you out of business.
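One control that stays in your hands no matter how the provider answers those questions is to encrypt the data yourself before it ever leaves your network, so the provider only stores ciphertext. Here is a minimal sketch using the Python cryptography package; the file name is a placeholder and key handling is deliberately simplified, since in practice the key belongs in your own key-management system, never in the cloud.

# Minimal sketch: encrypt a document locally before handing it to any cloud
# provider, so the provider only ever stores ciphertext. Uses the
# 'cryptography' package; the file name is a placeholder and key storage is
# simplified for illustration; keep the key in your own key-management system.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this on-premises, never in the cloud
cipher = Fernet(key)

with open("trade-secrets.docx", "rb") as f:          # placeholder file name
    ciphertext = cipher.encrypt(f.read())

with open("trade-secrets.docx.enc", "wb") as f:
    f.write(ciphertext)          # upload only the encrypted file

# Later, after pulling the file back from the cloud:
with open("trade-secrets.docx.enc", "rb") as f:
    plaintext = cipher.decrypt(f.read())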

Customers’ Personally Identifiable Information

Many of the regulations you may have to deal with, depending on your industry, relate to protection of personally identifying information (PII) of your partners and your customers. There can be some significant negative consequences in the event that someone gets hold of your customers’ private information. This data could be something as simple as the customer’s name, or something as dangerous as compromise of a customer’s social security number or credit card numbers.

This can be challenging. For example, let’s say you provide products or services that can be purchased online. It’s clear that, by the very nature of online sales, customers are going to have to interact with a cloud service to participate in the transaction. In this context, the important distinction is whether it’s your own cloud or someone else’s cloud that is storing this information.

If it’s your cloud, then you have tight command and control over what PII is obtained, what PII is stored, and the lifetime of the PII that is stored in an Internet accessible location. If it’s a cloud provider, you have to ask yourself what they’re doing to secure your customers’ and partners’ PII. Do they have a published policy? If there is a compromise, is there any kind of indemnification? What if you are fined or sued because of mishandling of PII? Does the cloud provider pay the fine, or are you left on the hook for the whole thing? What about damage to your firm’s brand equity? Is there anything the cloud provider can do about that? And does it really help for you to blame your cloud provider?

This is why I believe PII should remain in-house. When something goes wrong, it doesn’t matter whose “fault” it is; all the fingers are going to be pointed at you, so you should make sure that you do everything you can to ensure that PII is protected. When you have the control, you can do everything possible to keep PII safe; if you give it over to the cloud provider, you are limited in what you can do to protect PII.







5 Things that Should Never Go Into the Cloud (1)

Unless you’ve had your head in the sand (or in the clouds?) for the last few years, you’ve been hearing a lot about cloud computing. The “public cloud” is the name given to a collection of servers and services that are hosted in data centers that don’t belong to you and aren’t on your premises. The “cloud provider” who owns and controls the servers can provide a number of services, typically divided into the following categories:
  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)
IaaS, PaaS and SaaS have their own distinct advantages and disadvantages.

Note: There are many other definitions and descriptions of cloud computing. The "private cloud" is a term that refers to company-owned, on-premises datacenters that use the same technologies (such as virtualization) that cloud providers use. In this article, we will be talking about the public cloud.

The most common vision of cloud computing is that it provides an on-demand, elastic computing resource that can be provisioned and de-provisioned automatically to meet the need of the consumer of cloud computing services; the end result is that the purchaser of cloud computing services receives a “metered service” and only pays for what is used.
Sounds pretty good, eh? Think about it. Your company isn’t an IT company, and your core competency isn’t in IT or information services. Why are you maintaining your own datacenters? Wouldn’t it be better to outsource the management of the IT services you need to run your company instead of trying to maintain them yourself? If you outsource your computing services, you can move the dollars you spend for capital expenditures to operational expenses, and this smooths out your balance sheets, gives you a more predictable cash-flow pattern, and you don’t have to make a big capital outlay for what quickly becomes “yesterday’s technology.”

This is why so many people have their heads in the cloud these days. They claim that IT is “growing up” in the same way that public utilities have evolved over the years. It’s more cost effective and more reliable for a central power company to manage the delivery of electricity to cities, compared to having each home maintain its own generator. The same goes for water and gas utilities. Why maintain your own propane tank and water well when the utility companies have the expertise and financial resources to provide a highly available, world class service?

With "growing up," however, come growing pains. There is still a lot of distrust of the cloud in the business world, and for good reason. Probably everyone who's reading this article has lost some important information at some point in time because they trusted some online service to store their data and keep it always available. In our family, one of the worst experiences we had with this related to MSN music. When MSN music went away, all the songs that Tom and I had bought from the service over the years were no longer playable on any machine other than the three that were authorized – and there's no way to unlock these songs, so as these machines got old and died, that music became completely unavailable.

Thus, while there are some great things that the cloud can do for you, there are also some things that you need to be very careful about when you start thinking about a cloud strategy for your organization. In this article, we’ll look at five things that I consider to be too important to trust to the cloud, at least as it exists today.

Oct 13, 2012

Technology :: Transparent Networking Comes to Financial Sites

The Web 2.0 revolution is reaching the financial sector. It’s not only forums and chat rooms, and not another Twitter application for stocks. This time it’s about revealing trades in real time. One sector is standing out: forex. Trading currencies has been around for a few decades, but it was limited to big financial institutions. The internet, among other things, brought this niche of financial investment (or speculation if you wish) to the masses.

There’s lots of innovation around forex trading platforms, so this served as good ground for networking sites as well. In 2009 and 2010, these networking sites went deeper – deep into the pockets of investors. Similar to applications such as Google Latitude that expose your geographical location, these sites expose data. But it’s not only forex. Let’s see some examples:
  1. MT4Pips: There are lots of weird names in Web 2.0, and this name is not only Web 2.0-ish, but also uses forex jargon. MT is MetaTrader, the leading software in this field. Using this site, traders who use this specific software can share their real live trades with one another. The site comes from a company that already made a Digg-like site for forex and also runs a forums site and other social sites in this field. The options are somewhat basic, and are limited to this specific platform, but they are a great and neat start. Here’s a deeper review of this site.
  2. PT Multistation: This is software that enables traders to use multiple brokers at once. They recently added social features that allow traders (in foreign exchange, stocks and others) to chat and also to share real trades. This tool already supports multiple brokers and goes beyond the forex niche, but the social features still require improvement, according to this review. Here too, a good direction that still needs polishing.
  3. Currensee: Focusing only on the forex sector, this site enables sharing of real-time trades and has tools for aggregating the numbers and showing the big picture. While it isn’t limited to a specific piece of software, not all the brokers in the industry are available there. They are adding more and more brokers and improving the design, so this will probably emerge as the leading tool for social networking in forex. But we guess that most of you aren’t investing in this specific niche but rather in stocks. The last option supports more sectors.
All in all, these are nice starts, with each one taking a different direction. We hope that this trend of transparency will reach stock sites as well. Do you know of any sites like these? We’d love to hear.

How to Promote Your Blog through Networking



By networking I mean doing all of those things that I regularly write about here at ProBlogger: commenting on other blogs, answering comments that others leave on yours, emailing other bloggers when you write something that you think will interest them, making helpful suggestions to other bloggers, connecting with people via social networking sites like Facebook, LinkedIn and MySpace, emailing people to introduce yourself, linking up to others in your niche… the list of ways to network could go on and on, but today I’d like to put forward a few more general suggestions.
A number of suggestions that I’d make in networking with bloggers:
  • Prove Yourself First – if you’re brand new to your niche it could take time to make an impression. This isn’t necessarily because people are being cliquey – it’s often because they’re waiting to see if you’re going to stick with it and if you know what you’re talking about. There’s nothing more frustrating than networking with someone who disappears a couple of weeks later. Show you’re in it for the long haul and that your blog is making a contribution to the niche, and you’ll find people more willing to connect.
 
  • Persist But Don’t Annoy – some bloggers will take a few emails or conversations before they’ll warm up to you. There’s a lot of noise around the blogosphere so don’t be offended if people don’t respond – try again in a little while – but don’t stalk them :-)
 
  • Be Generous – a lot of the networking that I see going on between bloggers is very much about ‘taking’ rather than ‘giving’. One way to make a real impression on another person is to be generous with them. Help them achieve their goals – highlight their best work – encourage them – go out of your way to work on their terms. While you do need to have good boundaries (otherwise people will abuse your generosity), I think a spirit of generosity is the right attitude to go into networking with.
 
  • Have an Elevator Pitch – a lot has been written about business people being able to articulate what they do in a concise statement (having your elevator pitch). I think being able to do this is important with blog networking too. I get many emails every day from people wanting to work together in some way, and in many cases it takes me a few minutes into an email to even work out who they are and what they’re on about. Develop a few key sentences that describe who you are, what you do and what you offer others. It’s also worth having an elevator pitch for what your blog is about. Having thought through these things will help others understand what you can bring to a relationship – and it will help you understand that too.
 
  • Look for Points of Synergy – perhaps this says more about my personality type, but I’ve found the most profitable relationships to be ones where there was a ‘spark’ or ‘energy’ around our interaction – particularly where there was some sort of synergy around goals and objectives, but also some sort of connection when it comes to personality. My style has always been to look for points of ‘energy’ or ‘synergy’ and go with them. Perhaps someone else has a more technical description of this, but it’s worked well for me.
 
  • Don’t Expect Too Much Too Quickly – the most fruitful relationships that I’ve been a part of in blogging have emerged over time. Let the relationship grow naturally as you build trust and a mutual understanding of who the other person is and how you can work together.
 
  • Look for the B-listers – many so-called ‘A-list’ bloggers are approached all day long with requests to connect. While you might get lucky, I’ve found that approaching slightly less-known bloggers has more chance of working out (and they can still drive a lot of traffic).
 
  • Look in Neighboring Niches – it is important with blog networking to interact with other bloggers in your own niche – however, don’t close yourself off to relationships with bloggers outside of your niche, particularly those in niches that neighbor yours. When you limit yourself just to other bloggers exactly like you, you will end up dealing mainly with people who could see you as a direct competitor. While some will be open to interacting with you, I’ve found networking with people outside my niche can be fruitful. Another way to be strategic is not to look for networking opportunities just with other bloggers on your topic, but with bloggers who share a similar demographic of reader.
 
  • Become a Go-To Person and a Connector – as you network with others, don’t just focus upon you and the other person – attempt to draw others into the relationships you have. I find that people are particularly grateful when I can’t help them personally but can point them to someone else who can. This creates a good impression upon both of the parties that you connect, which can lead them to come to you again with opportunities (i.e. you become the ‘go-to’ person because they know you’ll either help them personally or point them to someone who can).
 
  • Ask Questions – one key that I’ve found to work in networking is to ask a lot of questions of those around you. Some bloggers go into networking with obvious agendas and goals but fail to listen to the other party. When you become a person who asks others about their goals and objectives, who knows their strengths and weaknesses, and who knows their dreams, you not only create a good impression on them but you’ll also be in a great position to know where your situation aligns with theirs – this is where networking becomes most effective.
Looking forward to hearing more about your own experience of blog networking and how it’s helped your blogging grow.

Oct 9, 2012

Information Technology :: How to Avoid Spam Filters with PHP mail() Emails


Just about everyone who uses PHP has encountered the popular PHP mail() function, which enables email to be sent from a server. This function is often preferred to other methods of sending email, such as sending mail with SMTP authentication, because its implementation is quick and easy. Unfortunately, when using the mail() function, your emails are more likely to be marked as spam. So how can we fix this?


A Simple Implementation Example

Many users of the mail() function have simple implementations like the one shown in the code sample below:

mail("recipient@recipient.com", "Message", "A simple message.", "From: The Sender ");
?>
While this implementation will successfully send an email, the email will probably get caught in the recipient’s spam filter. Fortunately, there are some simple fixes that can help you avoid spam filters.


4 Ways To Make Your PHP mail() Emails Less Spammy
1. Use Headers
In the simple example above, the from name and email address were added as the fourth parameter. Instead, consider using headers to set your From and Reply-To email addresses.

  $headers .= "Reply-To: The Sender \r\n";
  $headers .= "Return-Path: The Sender \r\n";
  $headers .= "From: The Sender \r\n";
?>
But headers are good for more than just setting details about the sender. They are also important for setting the content type, the email priority, and more. Here is how some additional headers look:

  $headers .= "Organization: Sender Organization\r\n";
  $headers .= "MIME-Version: 1.0\r\n";
  $headers .= "Content-type: text/plain; charset=iso-8859-1\r\n";
  $headers .= "X-Priority: 3\r\n";
  $headers .= "X-Mailer: PHP". phpversion() ."\r\n"
?>
Be sure to replace the fourth parameter with the $headers variable as shown below.

mail("recipient@recipient.com", "Message", "A simple message.", $headers);
?>


2. The Message Sender Domain and Server Domain Should Match
Spammers are notorious for sending emails from one server and trying to make the recipient believe that they came from somewhere else. So if you are sending an email from example@example.com, it is a good idea for the script to reside on example.com.
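As a rough illustration (not part of the original article), you could derive the From address from the host the script runs on, so the sender domain and the server domain line up. The mailbox name, display name and recipient below are placeholders.

<?php
// Illustrative sketch: build the sender address from the server's own domain
// so the From domain matches the host that actually sends the mail.
// Assumes the script runs under a web server, where $_SERVER['SERVER_NAME'] is set.
$serverDomain = $_SERVER['SERVER_NAME'];       // e.g. "example.com"
$from         = "noreply@" . $serverDomain;    // placeholder mailbox name

$headers  = "From: The Sender <" . $from . ">\r\n";
$headers .= "Reply-To: " . $from . "\r\n";

mail("recipient@recipient.com", "Message", "A simple message.", $headers);
?>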


3. Be Sure to Properly Use the Content-type Attribute
The Content-type attribute lets the message sender say whether an email is plain text or HTML, or whether it has attachments. Obviously, the easiest content type to use is text/plain. You just add your text as shown in the simple example, and you are done. But when you use the other content types, additional pieces might be expected. For example, with the text/html content type, an HTML body tag is expected. Not having this tag could result in your email being marked as spam.
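For instance, here is a minimal sketch (not from the original article; the addresses and message text are placeholders) of a text/html message wrapped in the expected html and body tags:

<?php
// Illustrative sketch: declare text/html and wrap the message in proper
// <html> and <body> tags so the email is well formed.
$headers  = "MIME-Version: 1.0\r\n";
$headers .= "Content-type: text/html; charset=iso-8859-1\r\n";
$headers .= "From: The Sender <sender@sender.com>\r\n";

$message  = "<html><body>";
$message .= "<p>A simple <strong>HTML</strong> message.</p>";
$message .= "</body></html>";

mail("recipient@recipient.com", "Message", $message, $headers);
?>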


4. Verify That Your Server Is Not Blacklisted
When a server is blacklisted, it means that the server has been identified as one that has been sending a lot of spam. This results in recipient mail servers rejecting or filtering any mail that is received from that server.
So if your mail is not being received, it is a good idea to verify that your server has not been blacklisted. This goes for both shared and dedicated servers. In a shared environment, it is common for other users on the server to be sending out spam. And in a dedicated environment, spammers may have found a way to exploit a vulnerability in a server or contact form to send out spam. So it is easy for either type of server to end up blacklisted.
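One way to do a quick check from PHP itself is to query a DNS blacklist. The sketch below is illustrative only and not from the original article: the Spamhaus ZEN zone is just one of several blacklists, and the IP is a documentation placeholder you would replace with your mail server's public IP.

<?php
// Illustrative sketch: look up a server IP on a DNS blacklist.
// A listed IP resolves to an A record under the blacklist zone.
$ip       = "203.0.113.10";                           // placeholder - use your server's public IP
$reversed = implode('.', array_reverse(explode('.', $ip)));
$lookup   = $reversed . ".zen.spamhaus.org";          // Spamhaus ZEN zone (one of several blacklists)

if (checkdnsrr($lookup, "A")) {
    echo "Warning: " . $ip . " appears to be listed on this blacklist.";
} else {
    echo $ip . " does not appear to be listed on this blacklist.";
}
?>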


Alright, now that you have the basics on avoiding spam filters, reconstruct your scripts and happy emailing!
