Insights from TCS Innovation Forum 2010 Theme: Infrastructure Simplification

White Paper
Insights from TCS Innovation Forum 2010
Theme: Infrastructure Simplification and Transformation
Most CIOs inherit a painful legacy. For instance, do you
know how many servers, operating systems, platforms and
apps constitute your IT infrastructure? How soon can you
perform an impact analysis if you want to introduce one
more technology? Are your systems duplicating effort,
inefficient and difficult to use? Very few IT heads or CIOs would have clear answers to
any of these questions.
Ageing IT infrastructures that have become unwieldy and
brittle are among the top concerns for CIOs in every
industry. Getting IT infrastructure to meet dynamic
business needs with agility, efficiency and security is a
challenge. So how are CIOs tackling this? And what are
some new technologies and tools used to transform
complex infrastructures? This is what TCS Innovation Forum
2010 (US edition: Cambridge, MA; UK edition: London) set out to explore. The panelists and participants
in this forum were from diverse areas of the IT innovation
landscape. Many new perspectives and approaches to the
issue emerged.
We present here nuggets from the knowledge and
experience experts shared around the theme of
simplification. We are sure that these will arm you in
handling the complexity challenge.
TCS Innovation Forum – US Panelists:
K Ananth Krishnan, CTO, Tata Consultancy Services
Narayanan Krishnakumar, VP and Chief IT Architect, EMC
Sanjit Biswas, Co-founder and CEO, Meraki
Mark Jaffe, CEO, Prelert
Doug Saucier, VP of Enterprise Architecture, Avery Dennison
Professor Harrick Vin, VP and Head of Systems Research, TCS Innovation Labs, TRDDC

TCS Innovation Forum – UK Panelists:
Buddy Willard, Group CIO, Veolia Water
Michael Bischoff, IT Director, London Clearing House
Jim DeHaven, DC Sales - Europe, Cisco
About TCS Innovation Forum
TCS holds this premier, by-invite-only event annually in the UK, USA
and Asia. It is held in the hub of innovation in each of these
geographies and attracts thought leaders who are working along
key innovation themes that challenge industry and society.
This year’s focus areas were Infrastructure Simplification, Cloud
Computing, Social Computing and Green IT. CIOs from various
domains, COIN partners and senior technologists presented
their hands-on experiences around these areas in their sessions.
TCS Innovation is focused on keeping customers ahead of the
curve. In doing this, it believes that no one organization can do it
alone. TCS’ Co-innovation Network (COIN)™ is a collaborative
network, connecting several entities in the innovation ecosystem.
COIN comes alive in TCS’ annual innovation forums. The format is
interactive and the brainstorming sessions offer true take-aways
around new technologies.
For enquiries regarding TCS Innovation Forum mail us at:
[email protected]
Table of Contents
The Context
The Challenges
Strategies and Solutions
The Achievements
Some Takeaways
The Context
Dr Harrick Vin (VP & Head, Systems Research) set the context at the panel discussions by
introducing a key research theme at TCS: Infrastructure Simplification and Transformation.
Harrick: All of us really want to build an IT environment that is simple, easy to manage, easy to
understand and highly efficient. Unfortunately, reality is somewhat different. In most enterprises, IT
environments have become extremely large, complex and heterogeneous, to the point that no one really
understands them completely.
A representative data set of a top-tier bank in the US (Figure 1) has more than 30,000 servers, more than
20,000 databases, more than 200,000 desktops, 35 different programming languages and more than 25
middleware environments.
[Figure content, "Top-tier bank in the US": 20+ server versions, 8+ operating systems, 10+ database server versions, 25+ middleware environments, 9+ petabytes of storage, 4000+ business applications of widely varying age, and significant overlap among applications: 26+ accounting apps, 10+ authentication apps, 12+ credit decision engines, 15+ BI tools, and more.]
Figure 1: Datacenter of a top tier bank
For every programming language, every environment imaginable is present. In the case of applications,
the scenario is similar: there are more than 4000 business applications, from 6 months old to 20 years old,
running on different platforms. Many versions of the same application are running in the
environment, partly because of mergers and acquisitions and partly because different geographic
regions have grown at different speeds. This is the level of complexity IT infrastructures have reached.
At the same time, there are three IT trends that business is watching:
i) Dramatic changes in technology and technology platforms: over the last few years, network
bandwidth has grown 2.7 times, processor capacity has become roughly 16 times cheaper and
storage cost has dropped by a factor of 10.
ii) Easy provisioning of technology for use within an enterprise: we are moving from a model of
dedicated, static provisioning of application resources, where business services were provisioned as
monoliths, to an environment of shared resources acquired on the fly through virtualization, more
component-level software design, SOA, and highly dynamic, on-demand provisioning.
iii) The move from highly capital-intensive environments to more flexible contracts leveraging
third-party data centers: moving from a capex model to an opex model, and eventually to an environment
where everything is a service.
In this context, we as CIOs and IT heads are trying to simplify the complexity in our IT infrastructure
without compromising on agility, security or delivering value to business needs. This forum brings
together panelists who have worked towards simplification in different areas, using different methods.
First, we present some of the complexity challenges each panelist has had to deal with within his/her
organization. You, as IT experts, are sure to identify with many of these.
The Challenges
Narayan Krishnakumar (EMC): Acquisitions continue to be our major mechanism of growth and we
have done about 50 acquisitions in the last decade or so. We have the same kinds of challenges:
globalization, application explosion, TCL security, complexity of everything, aging datacenters and
storage growth as well.
We have 48,000 internal users and 400,000-plus customers and partners; our IT organization runs five datacenters
with 7 terabytes of storage and about 400-plus applications. The spaghetti picture (Figure 2) is very much true
for us.
Even though we are part of a large infrastructure hi-tech company, our IT infrastructure is as complex as
everybody else’s.
Figure 2: Scale and Complexity of relationships in a datacenter
Mark Jaffe (founder and CEO of Prelert): Simplification is not always as simple as we might think.
Often, when we make one thing simpler, we make something else much more complex. For instance, by
simplifying the provisioning of new applications, (which has been a major IT focus in the last couple of
years), I believe we have only added to the complexity of the ongoing management of those same
services. While technologies such as load balancing & virtualization provide the agility required to
quickly deploy new services without additional hardware, managing and servicing of those applications
has become extremely difficult. This is because the new technologies that enable agility also obscure
how the underlying layers of technology are operating and how they work together. Therefore we have
trouble understanding what the impact of a change will be in these environments, and isolating the
cause of performance abnormalities has become extremely difficult.
Sanjit Biswas (Meraki): At Meraki, we look at networks. If you rewind 15 years, the network was pretty
well understood; you were connecting a bunch of servers and a few desktops, you had some policies
that you wanted to enforce, and you were able to design a network in advance for those needs. I see lots
of iPads today. Many devices and different applications are being deployed. Management is proving to be
a huge problem because you need to do different things in the network.
Buddy Willard, (Group CIO, Veolia Water): My team wants to always say yes (to business).
Unfortunately, the problem with saying yes is the cost and complexity of supporting new things. The
team doesn’t traditionally worry about the old solutions and they do not think about how to merge the
new technologies with the old technologies. The downside to this is that the old services get forgotten
and fall into disrepair, even as the business recognizes and rewards my team for delivering exactly what has
been asked of them. Unfortunately, we are hurting the business by saying yes all the time. In the long run,
services can only improve with an IT rationalization program.
Michael Bischoff, (IT Director, London Clearing House): We looked very hard at our operating cost line
and recognized that frankly we had been using our data center as a kind of expensive warehouse. When
we started (simplification), we had five different operating systems, five different RDBMS, four different
hardware infrastructure platforms, (fortunately only one network provider) and a vast range of
applications built on two different languages. Our demand for datacenter space (as a proxy for power)
was growing 30% per annum, so every two to three years we had to double the size of our data centers
and that was not sustainable from either a Green perspective or from an operating cost perspective.
We put whatever our business wanted us to put into that warehouse, whenever they wanted it and with
whatever technology. This meant that we had a highly heterogeneous environment that we were
operating and managing.
We run at 7% utilization in our datacenters. If we ran a construction or a manufacturing concern with the
asset utilization we have in our datacenters, we would go bust.
Jim DeHaven, (DC Sales - Europe, Cisco): I came over from an acquisition and my life has always been in
a disrupted stage!
In the last five years at Cisco, we had been focused on routing and switching, very effective routing and
switching. We then set about creating technology strategies for the next five years. But the crucial
questions were: What is the direction of technology? Who is the customer? Where is the market going?
And what are the inflection points? The challenge was to look back, take feedback from customers and
from technology ecosystems, and build broader long-term datacenter architectures as opposed to saying
“Here is the next server”, “Here is the next router”, and “Here is the next switch”.
Doug Saucier (VP, Avery Dennison): We had got all this diverse infrastructure and it was not on the same
scale. So one could leverage a little bit here and take advantage of some innovation, but not take
advantage on a larger scale. Then we had a huge opportunity to move the datacenter. We could have just
moved the data center, but I used to drive race cars and fly planes, so I am not risk averse. So I talked the
organization into actually transforming our entire core technical infrastructure when we moved the data
center. Many people who have moved data centers said, ‘This guy is crazy! You cannot move and
transform it at the same time!’
Many of our participants, and our extended audience of readers, could identify with the challenges mentioned.
Now for some strategies with which these challenges were addressed:
Strategies and Solutions
Doug’s strategy in moving and simplifying at the same time
Doug: We invested in what I call “forensic computing” which was to “Go find out what we have got and
put it someplace that we have access to, so that we could analyze it and look at all the complex
relationships.” This was to help us figure out what we could move and what we could not move, how we
had to go about it, and how we could simplify it.
The innovation was in deciding that our data centers would have three components: a virtualized storage
component, mid-range servers and x86 servers; that we would have a standard architecture across all
three data centers, with two active datacenters and one passive datacenter for DR; that the DR strategy
would be platform based, built on virtualization; and that this would all be accomplished in 18 months.
Buddy’s views on getting business to understand the need for controlling complexity….
Buddy Willard: Make sure you protect your resources with first rate business partners. We recently
completely reshuffled the IT organization. We have got front end people that are truly business experts
who have general IT capability and understanding. So they sit with the business on a strategic level,
understanding where the business wants to go in order to have IT actually deliver what they need when
they need it, before they even know that they need it. They are constantly working with the business
leaders to determine how we can migrate or retire their old solutions.
Empower someone to be your business process regulator, and I do not think I can emphasize that
enough. Process is the key to making this work. It has got to be iterative across the board. You’ve got to
be able to manage change. You have got to be able to manage how you did things before versus how
you will do them in future.
Mike works in a highly regulated environment. Some simplification strategies, especially those
that involve Public Clouds are a no-no. His experience in enforcing internal standardisation …
Mike Bischoff: Our business is staggeringly heavily regulated. At the last count, we had about 36
different regulatory bodies that looked after us, and that means we are very worried about data. So I do
not talk to my business about the public cloud as that scares them to death. I do talk about cloud
economics and it is something that we are heading to.
We decided that our complex IT set-up was non-sustainable. In partnership with TCS, we analyzed the
environment and found that we were using it as a warehouse: you have this really expensive, well
air-conditioned warehouse that you are putting stuff into. We decided to stop doing that and start at the
first stage of infrastructure transformation, which for us was about standardization. So when our business
units want to put something in our data center, we tell them what we can offer as a service. We tell them
that they can run on a small, medium or large configuration, and we can also tell them that they can run it
on a Solaris box, small, medium or large. We tell them not to come to us with an application that runs on
AS/400, as we are not interested in it. If they absolutely have to have that application on AS/400, can they
please have it running either as software as a service or in a hybrid cloud?
Narayan Krishnakumar (KK), at EMC, experiments with both public and private cloud.
KK: A major part of simplification is our cloud strategy. We have been building our private cloud which
we started with our own internal datacenter. We have been virtualizing not only the OS but other things
as well. We also have a lot of interaction with the public cloud. One of our large implementations has
been with www.salesforce.com. We will continue to evolve our private cloud, which is our internal data
center. We will continue to stretch into the public cloud which is using mechanisms of federation and
virtualization and information virtualization as well as security. Some of the federated cloud stuff is still
coming; today, we do not necessarily have a cloud bursting model – but that is coming.
We spent a lot of time in what we call the “IT production space” between 2004 and 2009. We moved a
whole bunch of development, test and IT-owned applications on to that platform. We have done OS
virtualization and used that as a proxy. We spent a lot of time in stabilizing what we call the “virtualization
platform”. One aspect of it is OS virtualization but it stretches to other aspects of storage virtualization
and application virtualization. Then we started moving a whole bunch of mission critical applications.
The challenge here is usually stabilizing the platform itself, and it is mostly about the technology. A lot of
care is needed around how much we are putting onto the platform which is being virtualized. Questions
such as “Is it stable?” and “Is it reliable?” need to be answered. But we have already done our homework.
The third phase is “IT as a service” and this is something which we are just entering.
Desktop virtualization - One of the problems we have seen is that there is a standard aging-desktop
program that stretches the refresh of desktops from 2-3 years to 4 years. We are on Windows XP as a platform
(we never went down the Vista route). So one of the things that we looked at was using virtual desktops
as a mechanism to provide a footprint in a server-based environment which will give a Windows 7 kind of
footprint. We looked at the cost required for providing desktops or laptops to people based on just
having a virtual desktop. We will have about 40,000-plus users on virtual desktops by the end of this year.
Now, whether it actually provided the cost savings is a little debatable, because we found that with more
devices coming to the table, including iPads, people want to have their desktop as well as the laptop.
Those were some cool simplification strategies at an enterprise level. There were other viewpoints presented
as well….
Mark shared his company’s method in managing complex networks.
Mark Jaffe: Chasing the cause of incidents in complex networks is like chasing rabbits when we have not
seen which hole the rabbit went into. I am going to start digging, but each hole that I dig takes time, and
they are screaming over my shoulder. If I pick the right hole, I probably solve the problem quickly.
We have to understand the causal chain that led to an issue. Our problem-solvers mine through
terabytes of potential data - logs and performance data - in a couple-of-minutes window before and after the
incident. They deal with gigabytes and terabytes of data that machines created. We suggest that if these
machines can generate this data then machines should be able to analyze this data as well. So we are
leveraging what we call third generation machine learning techniques that are model based. These are
self-learning, based on the work done by PhDs in London, and are now productized technologies being
used in a number of large banks.
We have some revolutionary new approaches to solve the problem. The technology does not require any
new data collection mechanism. We typically start by having customers send us some data that we can
leverage off-site, so that we can prove the value in a way that they are comfortable with. Then the system
gets deployed in real time in the customer’s environment.
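Prelert's third-generation, model-based techniques are proprietary and far richer than any toy example, but the underlying idea of letting machines establish a baseline for machine-generated metrics and flag deviations can be sketched in a few lines. This is an illustrative stand-in, not Prelert's actual algorithm, and the metric values are invented:

```python
from statistics import mean, stdev

def find_anomalies(baseline, samples, threshold=3.0):
    """Learn a normal range from `baseline`, then flag any of `samples`
    that sit more than `threshold` standard deviations away from it.
    Returns (index, value) pairs for the suspect samples."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(i, x) for i, x in enumerate(samples)
            if sigma and abs(x - mu) / sigma > threshold]

# Response-time metric (ms): a steady training window, then live data
# containing an incident spike around the failure window.
normal = [42, 40, 41, 43, 39, 41, 42, 40]
live = [41, 190, 42]
print(find_anomalies(normal, live))  # [(1, 190)]
```

Real systems learn per-metric models continuously and correlate anomalies across thousands of metrics to reconstruct the causal chain; the point of the sketch is only that baseline-and-deviation scoring is cheap enough to run over terabytes of machine data.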
Sanjit shared his belief in simplifying the networks themselves.
Sanjit: We have a complete wireless product line: Access Points (APs) for various environments and
cloud-hosted controllers. The APs are self-configured and phone home to our data center, and customers
do all the network configuration through a web-based control panel. There is no on-premise equipment
to configure to control the network. It is a pretty intuitive interface, yet sophisticated enough for enterprise IT
needs, giving full security and full integration with management services. Anytime an AP goes down,
whether it gets unplugged or a network cable gets cut, the customer gets an email. The customer also has a
bunch of remote diagnostic tools that operate in real time. So if someone is calling from a branch (of the
customer organization) saying the wireless network is slow or having packet loss issues, he can run a
test from the cloud control panel and the rest is done through the cloud.
There is actually a fair bit of intelligence in the AP and all the statistics are aggregated and managed in
the cloud. The interesting thing about all this is that it can be managed over the web by a very small and
completely untrained IT staff.
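Meraki's controller is a commercial product, but the phone-home pattern Sanjit describes can be illustrated with a toy sketch: each AP is expected to check in with the cloud periodically, and any AP that misses several consecutive check-ins is reported so an alert email can be sent. The fleet names, interval and grace period below are hypothetical:

```python
CHECKIN_INTERVAL = 60  # seconds; each AP is expected to phone home this often

def find_down_aps(last_checkin, now, grace=3):
    """Return APs that have missed `grace` consecutive check-ins,
    e.g. because they were unplugged or a cable was cut.
    `last_checkin` maps AP name -> timestamp of its last phone-home."""
    return sorted(ap for ap, ts in last_checkin.items()
                  if now - ts > grace * CHECKIN_INTERVAL)

# Hypothetical fleet: "branch-2" last phoned home 10 minutes ago.
now = 1_000_000
fleet = {"hq-1": now - 30, "hq-2": now - 95, "branch-2": now - 600}
print(find_down_aps(fleet, now))  # ['branch-2']
```

The alert-by-email behaviour would hang off this list; the design point is that because state lives in the cloud, the check requires no on-premise monitoring equipment.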
That was simplification from a wireless networking perspective. Jim spoke about trends at Cisco
that can help simplification…
Jim: About a year and a half ago, we took the organization and really broke it down from an architectural
perspective. We looked at the strategy for the last five to seven years around the model of consolidation,
virtualization, automation and then what we call the cloud. We looked at a long term strategy on how we
can help transform IT not only in terms of IT’s ability to deliver IT, but also how IT delivers its services back
into the business; how we can enable options around IT in terms of building a cloud provider; how we
can become a cloud provider. We looked at how we could disrupt the status quo, how we could disrupt
the legacy architectures that have been in place since the late 90s (what we call a PC based architecture).
We studied how we could reduce significant costs in terms of capex, moving to an opex model in terms of
how IT is consumed and how it is delivered.
So we ask, “Are we transforming the way the customer does business? Can we make a
greater impact?” It becomes less and less about the bits and bytes, important as they are. It is
about how we do this in a broader technology ecosystem to mitigate risks, and how we do this in
terms of leveraging the trends around security and around cloud, which we see
moving very, very quickly, particularly in the UK around the public sector.
Harrick explained TCS R&D’s capability in simplifying complexity.
Harrick: At TCS we have built a suite of tools as well as an end-to-end workbench that actually allows us
to take a very systematic, very data-driven and analytics-led approach towards simplifying IT plants. The
standard templates and tools gather the right amount of data from your operational environment. We
have built an enterprise-IT-aware tool that actually understands what enterprise IT is all about. We can
then very quickly collate all this information and cleanse it so that it is available for analysis. We have several
data analysis tools which essentially look at the IT operations, both from a technology as well as a labor
perspective, to understand and bring transparency from infrastructure all the way to business processes.
i-Transform is a planning tool which essentially automates the process of determining what to do, when
to do it and why: from choosing the right set of technology, to sizing the environment, to identifying
a schedule for migration. We identify “move groups”: how do you move groups without breaking
anything when you actually migrate from the current state to the to-be state? TCS is very good at this. It
traditionally has a very process-centric execution culture that asks “How do I actually execute this
transformation plan in a very systematic, factory-based approach?” because that is what is needed to
actually perform these types of transformation programs: act at a fast pace and with a high degree of certainty.
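The paper does not describe i-Transform's internals, but one common way to identify "move groups" that can be migrated without breaking anything is to treat application-to-application dependencies as a graph and move each connected component together. The sketch below illustrates that idea with invented application names; it is not TCS' actual tooling:

```python
from collections import defaultdict

def move_groups(dependencies):
    """Group applications into 'move groups': sets of apps joined by
    dependencies, which must migrate together so no live dependency
    is cut mid-move. `dependencies` is a list of (app_a, app_b) pairs."""
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in sorted(graph):
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:  # depth-first walk of one connected component
            n = stack.pop()
            if n not in group:
                group.add(n)
                stack.extend(graph[n] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

# Hypothetical inventory: trading and risk share a market database,
# while the HR portal is independent and can be moved on its own.
deps = [("trading", "marketdb"), ("risk", "marketdb"), ("hr-portal", "hr-db")]
print(move_groups(deps))
# [['hr-db', 'hr-portal'], ['marketdb', 'risk', 'trading']]
```

In practice the dependency data would come from the discovery and inventory phase Doug calls "forensic computing", and groups would be further split by downtime windows and platform constraints.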
Our panelists explained the thinking and planning processes and hard work that went into in each attempt at
simplification. But then, each panelist also had success stories to tell.
The Achievements
Doug: We just went live with our second major move. We have met every date. We are up to 60-70%
virtualization because we took advantage of an opportunity. The innovation was on the funding side
too. The business was not going to invest, so we had to fund all this from our operational savings over the
past 5 years. We accomplished this with some innovation and with TCS’ help.
Besides the milestones, there is transparency. How many of you know, down to the server IP address,
what is in every one of your datacenters? I do. (Because I had to move them!) We could not have done
that without TCS’ i-transform and a bunch of work that actually allowed us to do a complete inventory
and then plan our moves. It has not been without pain - I mean it has been a ton of work. We have
stretched TCS’ product a little bit, but I think we have done a very, very good job at working together.
Mike: We have adopted a zero-growth strategy in our data center. This means that we will not buy
any more data center space or contract for any more nodes in our datacenters, ever. I have
committed to our board that we will not ask for any more datacenter space or any more power. The
forecast at the moment, based on the contracts we have built with our datacenter providers and our power
suppliers, shows a 30% reduction in space and power over the next three years, as compared to an earlier
forecast of a 30% increase in power and space.
Sanjit: I will share a customer story.
Stanford Computer Science Department is our customer. They have a very sophisticated IT department
and they have a fair amount of staff on hand. But for them, the wireless network was running into issues.
Wi-Fi was not working properly; when a lot of the professors got together for faculty meetings, the
network was getting overloaded, and essentially it was complexity they did not want to deal with. They
said, “Hey, we are running all these other services for the department; we need to look after file
servers and other network infrastructure. Wireless is not something we want to worry about.”
Hence, they ended up taking our product and ripping out their physical APs. We deployed the whole
network in four hours and it was up and running. They love it. Every time there is an event,
they can create a new virtual network and provide access to that. We constantly push our new features to
them and what is really fascinating about this model is that for them it is about being agile. They have all
these different demands being placed on the network, robots, cameras, professors working on new
research projects and so they need to adapt their network very quickly.
Mark: In an investment banker’s world, market data is everything. Trading decisions are made in absolutely
real time, moving from seconds to milliseconds and now to microseconds. The cost of latency in those
environments is a loss in the bank’s competitiveness. One of the banks that we work with
was having this issue at the New York Stock Exchange. At the market open each morning, for about six
minutes, traders were not trading, which was a big opportunity lost. For six long months, they looked
through gigabytes and terabytes of data and hundreds of thousands of metrics to try and find what
was related to the incident and what could have possibly caused it. After being unable to solve this
problem for a long time, and with people screaming louder and louder as days passed, we were
brought in.
They had a customer portal which was failing: a customised Apache/Java-based application. They did not
see any error message suggesting the cause of the failure. They sent us a load of data through
secure FTP and we put it through our engine. We analyzed it and found the cause of the issue: a
configuration change that was well intended but done improperly. Within a day we had isolated
the cause, saving them a million dollars.
KK: We have been on a big spree of virtualization as part of our overall infrastructure simplification. We
have 65% of our images virtualized and 71% of our Intel estate virtualized. The remainder, which is around
Solaris, is the last mile for us, because some of the large mission-critical databases run on Unix-style hardware.
There is a lot of global support, and we have won a bunch of awards. The benefits in the last couple of years
include more Opex savings than Capex savings. We have also reduced carbon dioxide by about 30 million
pounds.
Great achievements, indeed. Finally, some takeaways and nugget of wisdom our panelists left us with…
Some Takeaways
Buddy Willard: Things have changed. We need more of a proactive approach to the business. Make sure
that you are there before they want to get there! The trick to that is making sure you are considering
everything done historically when moving to the new model.
Mike: IT is growing up. We are being prevented from buying new toys and tools and we are being asked
to operate more like a manufacturing organization.
Also, we are the victims of our own hype. I have stopped talking about cloud to my business because
they do not understand it and I am not helping them by talking about clouds.
I am really worried about governance when people can buy server power with their credit card like I did
last week. I bought a couple of servers from Rackspace and my firewall did not stop me, our governance
processes did not stop me, our security department did not stop me. So, what are we doing about that
kind of thing? Do you know today whether your data is sitting on Amazon’s elastic cloud, whether it is
replicated back to the US, and what rules it is then subject to? That scares me to death. So I think we
are going to have to get some really strong governance models about how we manage this stuff going
forward as we become less traditional IT shops and frankly more governance and risk managers for
our business.
Doug: IT is a commodity, so manage it like a commodity. You just cannot turn it over to a vendor; you
have to architect it; you have to control your enterprise architecture.
Let us try to figure out how we can put our feet on the ground from a finance standpoint, so that in the next
three years, when the next cycle comes, we can actually NOT do what every IT department does, which is
keep the budget the same and just buy more capacity because the infrastructure is so complex that you
cannot change it all out. Instead of keeping the price the same and buying more capacity, understand
your capacity and take advantage of it; that is more likely to drive your cost down.
KK: There are a lot of implications for people as well. What we have found is that as you virtualize more, a
single administrator can, for instance, administer storage, network and compute, so you start to see some
of those roles converging. At the same time, other roles are emerging: cloud architects, cloud capacity
planners, and cloud people who can talk to the business, package IT up as services and provide them in
an agile way.
As you have read, just as IT’s rapid evolution creates a painful legacy, smart minds are working on
managing this evolution so as to keep things simple. The TCS Innovation Forum 2010 provided glimpses
of strategies, standardization, business involvement, management tips and network ideas that can help
you simplify your IT infrastructure.
TCS believes that instead of relying on intuition and experience, we should rely extensively on data that
we can collect from the environment, build models of exactly what is going on. This brings a level of
transparency that allows you to understand your enterprise IT and use that insight to automate the
process of deriving a target state. Read more about Simplification Tools.
The TCS Experiment
We started a simple experiment related to social Q&A. We modeled an enterprise internal platform
based on Yahoo Answers. We created a taxonomy that mapped the way TCS employees work –
based on technology or specific process groups. Anybody could ask a question and anybody could
answer. We used video game mechanics in terms of badges and points and created a reputation
system. As more people use it, the system gets smarter at identifying experts. The system counts
vote-ups, views and bookmarks to identify experts. In three years, the system has about 100,000 to
130,000 active users and close to about 100,000 active questions. We archive the old questions.
The interesting aspect is that this entire platform is moderated entirely by the community and there
is no 50-member team moderating questions.
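The exact scoring used by the TCS platform is not given; a minimal sketch of expert identification from the signals the paper mentions (vote-ups, views, bookmarks) might weight each signal and rank users per topic. The weights, user names and topics below are all invented for illustration:

```python
# Hypothetical weights: an upvote on a user's answer signals expertise
# more strongly than a bookmark, which in turn beats a plain view.
WEIGHTS = {"upvote": 5, "bookmark": 3, "view": 1}

def top_experts(events, topic, k=2):
    """Rank users for a topic by a weighted sum of community signals.
    `events` is a list of (user, topic, signal) tuples."""
    scores = {}
    for user, t, signal in events:
        if t == topic:
            scores[user] = scores.get(user, 0) + WEIGHTS[signal]
    # Highest score first; ties broken alphabetically for stability.
    return sorted(scores, key=lambda u: (-scores[u], u))[:k]

events = [
    ("asha", "java", "upvote"), ("asha", "java", "upvote"),
    ("ravi", "java", "bookmark"), ("ravi", "java", "view"),
    ("asha", "testing", "view"),
]
print(top_experts(events, "java"))  # ['asha', 'ravi']
```

As the paper notes, the system "gets smarter" with use: in this framing, that simply means more events accumulate per user and per topic, so the ranking becomes more reliable over time.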
Initially, nobody over 27 or 28 was an active user of the system. There was a tendency to look at it as
a fun place where you could even enquire about good restaurants in Chennai or about weekend
places. There was a dip in usage amongst the senior employees. We turned this challenge into an
opportunity for senior employees by making them post “challenges” (or problems relating to actual
project scenarios). They could set up these mercenary challenges saying “If you solve this problem,
you get this much virtual currency upfront and otherwise you get about 20 points for submitting an
idea” and so on. On average, we see about 300 to 400 ideas for each challenge.
As ours is an IT organization, we mostly do knowledge work. We are in front of a laptop or a desktop
all day. It is part of our culture to belong to social networks. It is quite addictive, especially since the
virtual currencies and points are aggregated into a central rewards and recognition system.
For more information about TCS’ consulting services, contact [email protected]
Subscribe to TCS White Papers
TCS.com RSS: http://www.tcs.com/rss_feeds/Pages/feed.aspx?f=w
Feedburner: http://feeds2.feedburner.com/tcswhitepapers
About Tata Consultancy Services (TCS)
Tata Consultancy Services is an IT services, consulting and business solutions organization that
delivers real results to global business, ensuring a level of certainty no other firm can match.
TCS offers a consulting-led, integrated portfolio of IT and IT-enabled infrastructure, engineering
and assurance services. This is delivered through its unique Global Network Delivery Model,
recognized as the benchmark of excellence in software development. A part of the Tata Group,
India’s largest industrial conglomerate, TCS has a global footprint and is listed on the National
Stock Exchange and Bombay Stock Exchange in India.
IT Services
Business Solutions
All content / information present here is the exclusive property of Tata Consultancy Services Limited (TCS). The content / information contained here is
correct at the time of publishing. No material from here may be copied, modified, reproduced, republished, uploaded, transmitted, posted or distributed in
any form without prior written permission from TCS. Unauthorized use of the content / information appearing here may violate copyright, trademark and
other applicable laws, and could result in criminal or civil penalties. Copyright © 2011 Tata Consultancy Services Limited
TCS Design Services I M I 05 I 11
For more information, visit us at www.tcs.com