What Is Data Science?

Data science is a form of applied statistics that incorporates elements of computer science and
mathematics to extract insight from both quantitative and qualitative data.

Tools and technologies used in data science include machine learning algorithms and frameworks, as well
as programming languages and visualization libraries.

A data scientist combines programming, mathematics, and domain knowledge to answer questions using data.

Why Is Data Science Important?

Data science practices help businesses stay competitive and become more productive.

Organizations that prioritize data science uncover trends and opportunities that might have gone unnoticed had
they chosen not to tap into the data available to them. The insights gained from data science can have a
tremendous impact on business outcomes.

Data science extracts useful information from both big datasets and small datasets. Although large amounts of
data are needed to train artificial intelligence (AI) systems, data science can still help with small datasets.

For example, retailers used to forecast inventory for their stores based on same-store sales. When the COVID-19
pandemic caused stores to close, retailers had to change their forecasting methods as the amount and type of
data available changed.

When there is only a small amount of data to look at, data science uses practices like data augmentation,
synthetic data generation, transfer learning, and ensemble learning to supply insights.
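
As a minimal sketch of two of these small-data practices, the snippet below (an illustration with synthetic data, not part of the original article) augments a tiny dataset with slightly perturbed copies of its rows and then trains an ensemble of shallow decision trees, which tends to be more stable than a single model when data is scarce.

```python
# Illustrative only: synthetic data standing in for a small real-world dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# A deliberately small dataset: 40 rows, 3 features, binary label.
X_small = rng.normal(size=(40, 3))
y_small = (X_small[:, 0] + X_small[:, 1] > 0).astype(int)

# Data augmentation: add jittered copies of each row so the model
# sees more examples than were actually collected.
X_aug = np.vstack([X_small, X_small + rng.normal(scale=0.05, size=X_small.shape)])
y_aug = np.concatenate([y_small, y_small])

# Ensemble learning: many shallow trees averaged together (a random forest).
ensemble = RandomForestClassifier(n_estimators=100, max_depth=3, random_state=0)
scores = cross_val_score(ensemble, X_aug, y_aug, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```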

Data science also enables an organization to build additional resiliency. In a technological world where
conditions can change at a moment's notice, businesses need to be able to adapt and respond quickly in
order to survive, and data science can help them do that.

Data science is leveraged by many organizations and has a wide range of industry-specific applications. Organizations
that don't take advantage of it risk falling behind, or shutting down altogether.

Data Science Lifecycle

Data science is a cyclical process. The lifecycle can be broken down into the following steps:

Topic Expertise: For starters, a data scientist needs to have a basic understanding of the topic or problem they are trying to explore so that they can ask meaningful questions about that topic or problem. The nature of data science is to seek explanations for why things are the way they are. A foundation of topic expertise defines the need for a data science project and leads to more confident, data-driven decisions.
Data Acquisition: The next step in the data science lifecycle is collecting the right data to help answer the defined question. The data might live in a variety of places or be difficult to access depending on a person’s skill set. But the success of the rest of the data science process is dependent on the quality of data collected in this step — and how well it is prepared.
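
A minimal sketch of what this step often looks like in Python, assuming pandas and requests are available; the file name and API URL here are hypothetical placeholders, not part of the original article.

```python
# Hypothetical sources: a CSV export and a JSON REST endpoint.
import pandas as pd
import requests

# Flat-file source, e.g. an export from an operational system.
sales = pd.read_csv("sales.csv", parse_dates=["order_date"])

# API source; many services return JSON that pandas can flatten into rows.
response = requests.get("https://api.example.com/v1/customers", timeout=30)
response.raise_for_status()
customers = pd.json_normalize(response.json())

print(sales.shape, customers.shape)
```
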
Data Preparation: Data Preparation is the most time-consuming — and arguably most important — step in the data science cycle. As the saying goes, if you put garbage in, you’ll get garbage out. Data needs to be properly cleaned and blended ahead of analysis. This might include integrating disparate data sources, handling missing values and outliers, and more. During this iterative step, a data scientist might realize they need to go back and gather more data.
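
The sketch below shows, with made-up tables and column names, a few of the preparation tasks mentioned above: blending two sources, filling missing values, and capping outliers.

```python
import numpy as np
import pandas as pd

# Two hypothetical sources to blend.
sales = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "order_amount": [120.0, np.nan, 95.0, 10_000.0],  # one missing, one extreme
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["north", "south", None],
})

# Integrate the disparate sources on a shared key.
df = sales.merge(customers, on="customer_id", how="left")

# Handle missing values: numeric gaps get the median, categorical gaps a flag.
df["order_amount"] = df["order_amount"].fillna(df["order_amount"].median())
df["region"] = df["region"].fillna("unknown")

# Handle outliers: cap order amounts at the 1st and 99th percentiles.
low, high = df["order_amount"].quantile([0.01, 0.99])
df["order_amount"] = df["order_amount"].clip(lower=low, upper=high)
print(df)
```
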
Data Exploration: Data exploration involves identifying and understanding patterns in a dataset. Once the data is clean and usable, data scientists can spend time getting to know the data and forming hypotheses to test. This is another step in an iterative process, and a data scientist might need to take one or two steps back to perform additional cleansing and blending based on findings. This practice includes reviewing the distinct attributes of each data point, or “features” in the dataset, and determining whether further blending and data transformations could yield potentially meaningful new features. The process of creating new features in data is often referred to as “feature engineering.” It typically occurs in the interplay between the data exploration and data preparation steps.
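
As an illustration (again with made-up data), exploration and simple feature engineering might look something like this:

```python
import pandas as pd

# Hypothetical prepared dataset.
df = pd.DataFrame({
    "order_amount": [120.0, 95.0, 40.0, 300.0, 12.0],
    "n_orders": [3, 2, 1, 6, 1],
    "churned": [0, 0, 1, 0, 1],
})

# Get to know the data: distributions and relationships between features.
print(df.describe())
print(df.corr())

# Feature engineering: derive a new feature that may be more meaningful
# than either raw column on its own.
df["amount_per_order"] = df["order_amount"] / df["n_orders"]
print(df.groupby("churned")["amount_per_order"].mean())
```
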
Predictive Modeling and Evaluation: After exploration, a data scientist can start training predictive models. Predictive modeling can often blend together with data exploration. Once modeling and evaluation begin, it’s likely that a data scientist will notice new things about the features in the dataset and take another step back to iterate on the feature engineering. As models are built, they need to be assessed. A data scientist should continue to test and refine models until they end up with one they are happy with.
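
A minimal sketch of this train-evaluate-iterate loop, using synthetic data and scikit-learn purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a prepared, feature-engineered dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

# Hold out a test set so the evaluation reflects unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# If the score is unsatisfying, loop back: engineer new features,
# try other algorithms, or gather more data.
```
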
Interpretation and Deployment: The outcome of all this work might be an interpretation of the data and results, where the data scientist uses the model and all of the analysis they’ve conducted throughout the lifecycle to answer the question they started with. Another outcome might be that the model is destined for deployment, where it will be used to help stakeholders make data-driven decisions or automate a process (if this is your outcome, don’t forget about the next step — monitoring).
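
If the outcome is deployment, one common pattern (sketched below with a hypothetical model and file name) is to persist the trained model and reload it wherever scoring happens, whether that is a batch job or a service:

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a stand-in model on synthetic data so the snippet runs on its own.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Persist the trained model as a deployable artifact.
joblib.dump(model, "churn_model.joblib")

# Elsewhere, at decision time, reload and score new records.
loaded = joblib.load("churn_model.joblib")
new_rows = rng.normal(size=(5, 4))
print(loaded.predict(new_rows))
```
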
Monitoring: After the model is deployed, it needs to be checked and maintained, so it can keep performing properly even as it receives new data. Models need to be monitored so that when data shifts due to changes in behavior or other factors, model adjustments can be made accordingly.
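
One simple monitoring check, sketched below with synthetic data, is to compare the distribution of an incoming feature against the data the model was trained on and flag a shift that may call for retraining:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_feature = rng.normal(loc=0.0, scale=1.0, size=1000)
incoming_feature = rng.normal(loc=0.4, scale=1.0, size=500)  # behavior has shifted

# Two-sample Kolmogorov-Smirnov test: do the two samples look like they
# come from the same distribution?
stat, p_value = ks_2samp(training_feature, incoming_feature)
if p_value < 0.01:
    print(f"Possible data drift (KS statistic {stat:.3f}): "
          "review the model and consider retraining.")
```
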
Repeat: The cycle repeats itself whether the final goal was immediate interpretation or longer-term deployment. The ultimate outcome of any data science project should be to learn something new about the topic or problem being explored, which in turn increases topic expertise and leads to asking new, deeper questions.

Data Science Applications Across Different Industries

Companies use data science every day to improve their products and internal operations. Almost any type of
business in any industry can benefit from practicing data science.

Some example use cases include:

  • An energy software company using recommendation models to match eligible customers with new or existing energy products
  • A financial services company using machine learning models to reach prospective customers that may have been overlooked by traditional banking institutions
  • A car sharing company using dynamic pricing models to suggest prices to the people who list and rent out cars
  • A higher education institution combining data from transcripts, standardized test scores, demographics and more to identify students at risk of not graduating
  • A fintech company using a combination of complex data lookups and decision algorithms to assess whether a loan applicant is fraudulent

Dive into each of these use cases in the whitepaper Data Science in Practice: Five Common Applications.

Business Intelligence vs Data Science

While data science has significant business applications, its focus is broader and its tactics more diverse than those of business intelligence.

Business intelligence leverages statistics and visualization tools against traditional structured data to
describe and present current and historical trends in a way that’s easy for people to consume and understand.

Data science leverages these approaches as well as machine learning against both structured and
unstructured data to investigate relationships and discover likely outcomes or optimal actions.

While business intelligence’s most typical output is some form of report or dashboard (thus informing a
human, who will make a best-estimate decision), data science produces decisions and actions that can be
executed directly.
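
To make the contrast concrete, the sketch below (with hypothetical sales figures) puts a BI-style descriptive summary next to a data science-style forecast that could feed a decision directly:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

history = pd.DataFrame({"month": [1, 2, 3, 4, 5, 6],
                        "sales": [100, 110, 125, 130, 150, 165]})

# Business intelligence: describe and present what already happened.
print(history.describe())

# Data science: estimate a likely future outcome that can drive an action.
model = LinearRegression().fit(history[["month"]], history["sales"])
next_month = pd.DataFrame({"month": [7]})
print("Forecast for month 7:", float(model.predict(next_month)[0]))
```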

Who Can Use Data Science?

Despite what many think, data scientists aren’t the only ones who use data science. In reality, anyone can do
data science. Thanks to technology advancements, data science no longer requires specialized coding knowledge or
advanced statistical know-how. “Drag-and-drop” data science is now a widely accepted and viable form of data
science, giving analysts and other data workers the power to build and deploy models at scale. These “citizen
data scientists,” or data workers who can wield advanced analytics without knowing the intricacies of the
back-end processes, are a highly sought-after group of workers.

Because data science is so in demand, because traditional data scientists often command high salaries, and
because their limited number can create bottlenecks, citizen data scientists are seen as a data science
multiplier. With appropriate checks in place, citizen data scientists can significantly ramp up model production in
any corporation, driving insights and revenue that would otherwise be impossible.

How to Get Started With Data Science

The Alteryx Analytics Automation Platform allows you to build automated and repeatable workflows that can make
the larger data science process easier and more efficient. Data access, preparation, modeling, and sharing of
analytic results all happen in the same place, on one easy-to-use platform.

You can also learn how to integrate Alteryx with Snowflake, a cloud-based data storage and analytics tool, using
our starter kit. Using the two together makes it easy to drive analytic and data science outcomes in the cloud.