All about Python Programming Language and Course Schedule

 


JuTT Developer Series

Python for Programmers

JuTT BaDshaH



Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.
The authors and publisher have taken care in the preparation of this book, but make no
expressed or implied warranty of any kind and assume no responsibility for errors or
omissions. No liability is assumed for incidental or consequential damages in
connection with or arising out of the use of the information or programs contained
herein.
For information about buying this title in bulk quantities, or for special sales opportunities (which may include electronic versions, custom cover designs, and content particular to your business, training goals, marketing focus or branding interests), please contact me at juttbadshah1120@gmail.com or WhatsApp.
Library of Congress Control Number: 2019933267
Copyright © 2020 Juttbadshah Education, Inc.
All rights reserved. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, request forms, and the appropriate contacts within the Pearson Education Global Rights & Permissions Department, please visit

JuTT BaDshaH and the double-thumbs-up bug are registered trademarks of JuTT BaDshaH and Associates, Inc.
Python logo courtesy of the Programming King.
Cover design by JuTT BaDshaH.
Cover art by Juttbadshah/Shutterstock.
ISBN-13: 978-0-13-522433-5
ISBN-10: 0-13-522433-0
1 19

Preface

“There’s gold in them thar hills!”
Welcome to Python for Programmers! In this course, you’ll learn hands-on with today’s most compelling, leading-edge computing technologies, and you’ll program in Python—one of the world’s most popular languages and the fastest growing among them.

Developers often quickly discover that they like Python. They appreciate its expressive power, readability, conciseness and interactivity. They like the world of open-source software development that’s generating a rapidly growing base of reusable software for an enormous range of application areas.

For many decades, some powerful trends have been in place. Computer hardware has rapidly been getting faster, cheaper and smaller. Internet bandwidth has rapidly been getting larger and cheaper. And quality computer software has become ever more abundant and essentially free or nearly free through the “open source” movement. Soon, the “Internet of Things” will connect tens of billions of devices of every imaginable type. These will generate enormous volumes of data at rapidly increasing speeds and quantities.

In computing today, the latest innovations are “all about the data”—data science, data analytics, big data, relational databases (SQL), and NoSQL and NewSQL databases, each of which we address along with an innovative treatment of Python programming.

JOBS REQUIRING DATA SCIENCE SKILLS

In 2011, McKinsey Global Institute produced their report, “Big data: The next frontier for innovation, competition and productivity.”¹ In it, they said, “The United States alone faces a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts to analyze big data and make decisions based on their findings.” This continues to be the case. The August 2018 “LinkedIn Workforce Report”² says the United States has a shortage of over 150,000 people with data science skills. A 2017 report from IBM, Burning Glass Technologies and the Business-Higher Education Forum³ says that by 2020 in the United States there will be hundreds of thousands of new jobs requiring data science skills.
  1. https://www.mckinsey.com/~/media/McKinsey/Business%20Functions/McKinsey%20Digital/Our%20Insights/Big%20data%20The%20next%20frontier%20for%20innovation/MGI_big_data_full_report.ashx.
  2. https://economicgraph.linkedin.com/resources/linkedin-workforce-report-august-2018.
  3. https://www.burning-glass.com/wp-content/uploads/The_Quant_Crunch.pdf.
MODULAR ARCHITECTURE
The book’s modular architecture (please see the Table of Contents graphic on the book’s inside front cover) helps us meet the diverse needs of various professional audiences.

Chapters 1–10 cover Python programming. These chapters each include a brief Intro to Data Science section introducing artificial intelligence, basic descriptive statistics, measures of central tendency and dispersion, simulation, static and dynamic visualization, working with CSV files, pandas for data exploration and data wrangling, time series and simple linear regression. These help you prepare for the data science, AI, big data and cloud case studies in Chapters 11–16, which present opportunities for you to use real-world datasets in complete case studies.

After covering Python Chapters 1–5 and a few key parts of Chapters 6–7, you’ll be able to handle significant portions of the case studies in Chapters 11–16. The “Chapter Dependencies” section of this Preface will help trainers plan their professional courses in the context of the book’s unique architecture.

Chapters 11–16 are loaded with cool, powerful, contemporary examples. They present hands-on implementation case studies on topics such as natural language processing, data mining Twitter, cognitive computing with IBM’s Watson, supervised machine learning with classification and regression, unsupervised machine learning with clustering, deep learning with convolutional neural networks, deep learning with recurrent neural networks, big data with Hadoop, Spark and NoSQL databases, the Internet of Things and more. Along the way, you’ll acquire a broad literacy of data science terms and concepts, ranging from brief definitions to using concepts in small, medium and large programs. Browsing the book’s detailed Table of Contents and Index will give you a sense of the breadth of coverage.

KEY FEATURES

KIS (Keep It Simple), KIS (Keep It Small), KIT (Keep It Topical)

Keep it simple—In every aspect of the book, we strive for simplicity and clarity. For example, when we present natural language processing, we use the simple and intuitive TextBlob library rather than the more complex NLTK. In our deep learning presentation, we prefer Keras to TensorFlow. In general, when multiple libraries could be used to perform similar tasks, we use the simplest one.

Keep it small—Most of the book’s 538 examples are small—often just a few lines of code, with immediate interactive IPython feedback. We also include 40 larger scripts and in-depth case studies.

Keep it topical—We read scores of recent Python-programming and data science books, and browsed, read or watched about 15,000 current articles, research papers, white papers, videos, blog posts, forum posts and documentation pieces. This enabled us to “take the pulse” of the Python, computer science, data science, AI, big data and cloud communities.

Immediate-Feedback: Exploring, Discovering and Experimenting with IPython

The ideal way to learn from this book is to read it and run the code examples in parallel. Throughout the book, we use the IPython interpreter, which provides a friendly, immediate-feedback interactive mode for quickly exploring, discovering and experimenting with Python and its extensive libraries. Most of the code is presented in small, interactive IPython sessions. For each code snippet you write, IPython immediately reads it, evaluates it and prints the results. This instant feedback keeps your attention, boosts learning, facilitates rapid prototyping and speeds the software-development process.

Our books always emphasize the live-code approach, focusing on complete, working programs with live inputs and outputs. IPython’s “magic” is that it turns even snippets into code that “comes alive” as you enter each line. This promotes learning and encourages experimentation.

Python Programming Fundamentals

First and foremost, this book provides rich Python coverage. We discuss Python’s programming models—procedural programming, functional-style programming and object-oriented programming. We use best practices, emphasizing current idiom.

Functional-style programming is used throughout the book as appropriate. A chart in Chapter 4 lists most of Python’s key functional-style programming capabilities and the chapters in which we initially cover most of them.
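
To give a flavor of the functional-style capabilities that chart surveys, here is a minimal sketch in plain Python. The specific snippets and data values are our own illustrative examples, not taken from the book:

```python
# A few functional-style techniques: filtering, mapping via a comprehension,
# reduction and non-destructive sorting.
from functools import reduce

numbers = [10, 3, 7, 1, 9, 4]

evens = list(filter(lambda x: x % 2 == 0, numbers))  # filtering
squares = [x ** 2 for x in numbers]                  # mapping (comprehension)
total = reduce(lambda acc, x: acc + x, numbers, 0)   # reduction
ordered = sorted(numbers, reverse=True)              # original list unchanged

print(evens)    # [10, 4]
print(total)    # 34
print(ordered)  # [10, 9, 7, 4, 3, 1]
```

Each operation produces a new result rather than mutating `numbers`, which is the essence of the functional style.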

538 Code Examples

You’ll get an engaging, challenging and entertaining introduction to Python with 538 real-world examples ranging from individual snippets to substantial computer science, data science, artificial intelligence and big data case studies. You’ll attack significant tasks with AI, big data and cloud technologies like natural language processing, data mining Twitter, machine learning, deep learning, Hadoop, MapReduce, Spark, IBM Watson, key data science libraries (NumPy, pandas, SciPy, NLTK, TextBlob, spaCy, Textatistic, Tweepy, Scikit-learn, Keras), key visualization libraries (Matplotlib, Seaborn, Folium) and more.

Avoid Heavy Math in Favor of English Explanations

We capture the conceptual essence of the mathematics and put it to work in our examples. We do this by using libraries such as statistics, NumPy, SciPy, pandas and many others, which hide the mathematical complexity. So, it’s straightforward for you to get many of the benefits of mathematical techniques like linear regression without having to know the mathematics behind them. In the machine-learning and deep-learning examples, we focus on creating objects that do the math for you “behind the scenes.”

Visualizations

67 static, dynamic, animated and interactive visualizations (charts, graphs, pictures, animations etc.) help you understand concepts. Rather than including a treatment of low-level graphics programming, we focus on high-level visualizations produced by Matplotlib, Seaborn, pandas and Folium (for interactive maps). We use visualizations as a pedagogic tool. For example, we make the law of large numbers “come alive” in a dynamic die-rolling simulation and bar chart. As the number of rolls increases, you’ll see each face’s percentage of the total rolls gradually approach 16.667% (1/6th) and the sizes of the bars representing the percentages equalize.

Visualizations are crucial in big data for data exploration and communicating reproducible research results, where the data items can number in the millions, billions or more. A common saying is that a picture is worth a thousand words—in big data, a visualization could be worth billions, trillions or even more items in a database. Visualizations enable you to “fly 40,000 feet above the data” to see it “in the large” and to get to know your data. Descriptive statistics help but can be misleading. For example, Anscombe’s quartet demonstrates through visualizations that significantly different datasets can have nearly identical descriptive statistics.

We show the visualization and animation code so you can implement your own. We also provide the animations in source-code files and as Jupyter Notebooks, so you can conveniently customize the code and animation parameters, re-execute the animations and see the effects of the changes.
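
The law-of-large-numbers idea behind the die-rolling demo can be sketched in text alone (the book animates it with Matplotlib; this simplified stand-in just prints the percentages):

```python
# Roll a six-sided die more and more times; each face's share of the rolls
# drifts toward 1/6 (about 16.67%) as the roll count grows.
import random
from collections import Counter

random.seed(2020)  # reproducible rolls

for rolls in [600, 60_000, 600_000]:
    frequencies = Counter(random.randrange(1, 7) for _ in range(rolls))
    print(f'{rolls:>7,} rolls:',
          ' '.join(f'{face}:{frequencies[face] / rolls:.2%}'
                   for face in range(1, 7)))
```

With 600 rolls the percentages wobble noticeably; by 600,000 rolls they cluster tightly around 16.67%.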

Data Experiences 

Our Intro to Data Science sections and case studies in Chapters 11–16 provide rich data experiences. You’ll work with many real-world datasets and data sources. There’s an enormous variety of free open datasets available online for you to experiment with. Some of the sites we reference list hundreds or thousands of datasets. Many libraries you’ll use come bundled with popular datasets for experimentation. You’ll learn the steps required to obtain data and prepare it for analysis, analyze that data using many techniques, tune your models and communicate your results effectively, especially through visualization.

GitHub

GitHub is an excellent venue for finding open-source code to incorporate into your projects (and to contribute your code to the open-source community). It’s also a crucial element of the software developer’s arsenal, with version control tools that help teams of developers manage open-source (and private) projects. You’ll use an extraordinary range of free and open-source Python and data science libraries, and free, free-trial and freemium offerings of software and cloud services. Many of the libraries are hosted on GitHub.

Hands-On Cloud Computing

Much of big data analytics occurs in the cloud, where it’s easy to dynamically scale the amount of hardware and software your applications need. You’ll work with various cloud-based services (some directly and some indirectly), including Twitter, Google Translate, IBM Watson, Microsoft Azure, OpenMapQuest, geopy, Dweet.io and PubNub. We encourage you to use free, free-trial or freemium cloud services. We prefer those that don’t require a credit card because you don’t want to risk accidentally running up big bills. If you decide to use a service that requires a credit card, ensure that the tier you’re using for free will not automatically jump to a paid tier.

Database, Big Data and Big Data Infrastructure

According to IBM (Nov. 2016), 90% of the world’s data was created in the last two years. Evidence indicates that the speed of data creation is rapidly accelerating.
According to a March 2016 AnalyticsWeek article, within five years there will be over 50 billion devices connected to the Internet and by 2020 we’ll be producing 1.7 megabytes of new data every second for every person on the planet!
We include a treatment of relational databases and SQL with SQLite. Databases are critical big data infrastructure for storing and manipulating the massive amounts of data you’ll process. Relational databases process structured data—they’re not geared to the unstructured and semi-structured data in big data applications. So, as big data evolved, NoSQL and NewSQL databases were created to handle such data efficiently. We include a NoSQL and NewSQL overview and a hands-on case study with a MongoDB JSON document database. MongoDB is the most popular NoSQL database. We discuss big data hardware and software infrastructure in Chapter 16, “Big Data: Hadoop, Spark, NoSQL and IoT (Internet of Things).”
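
The SQLite side of that coverage needs nothing beyond the Standard Library. Here is a minimal sketch using an in-memory database; the table and rows are our own illustrative examples, not the book’s:

```python
# Create a table, insert rows and query them back with the sqlite3 module.
import sqlite3

connection = sqlite3.connect(':memory:')  # throwaway in-memory database
cursor = connection.cursor()
cursor.execute('CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)')
cursor.executemany('INSERT INTO authors (name) VALUES (?)',
                   [('Ada',), ('Grace',), ('Guido',)])
connection.commit()

rows = cursor.execute('SELECT name FROM authors ORDER BY name').fetchall()
print(rows)  # [('Ada',), ('Grace',), ('Guido',)]
connection.close()
```

Swapping `':memory:'` for a filename gives you a persistent database with no other code changes.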

Artificial Intelligence Case Studies

In case study Chapters 11–15, we present artificial intelligence topics, including natural language processing, data mining Twitter to perform sentiment analysis, cognitive computing with IBM Watson, supervised machine learning, unsupervised machine learning and deep learning. Chapter 16 presents the big data hardware and software infrastructure that enables computer scientists and data scientists to implement leading-edge AI-based solutions.

Built-In Collections: Lists, Tuples, Sets, Dictionaries

There’s little reason today for most application developers to build custom data structures. The book features a rich two-chapter treatment of Python’s built-in data structures—lists, tuples, dictionaries and sets—with which most data-structuring tasks can be accomplished.
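
A quick taste of those four built-in collections (the values here are our own illustrative examples):

```python
# The four built-in collections in one glance.
grades = [87, 96, 70]                     # list: ordered, mutable
point = (3.5, 7.2)                        # tuple: ordered, immutable
unique_words = {'to', 'be', 'or', 'not'}  # set: unordered, no duplicates
roman = {'I': 1, 'V': 5, 'X': 10}         # dictionary: key-value pairs

grades.append(91)                 # lists grow in place
print(sorted(unique_words))       # ['be', 'not', 'or', 'to']
print(roman['X'])                 # 10
print(point[0] + point[1])        # 10.7
```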

Array-Oriented Programming with NumPy Arrays and Pandas Series/DataFrames

We also focus on three key data structures from open-source libraries—NumPy arrays, pandas Series and pandas DataFrames. These are used extensively in data science, computer science, artificial intelligence and big data. NumPy offers as much as two orders of magnitude higher performance than built-in Python lists. We include in Chapter 7 a rich treatment of NumPy arrays. Many libraries, such as pandas, are built on NumPy. The Intro to Data Science sections in Chapters 7–9 introduce pandas Series and DataFrames, which along with NumPy arrays are then used throughout the remaining chapters.
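
A small taste of array-oriented programming, assuming NumPy is installed (`pip install numpy`); the grade values are our own illustrative example:

```python
# Whole-array operations replace explicit loops.
import numpy as np

grades = np.array([[87, 96, 70],
                   [100, 87, 90]])

print(grades * 1.1)         # element-wise scaling of every grade at once
print(grades.mean())        # mean of all six grades
print(grades.mean(axis=0))  # per-column means: [93.5 91.5 80.]
```

The absence of loops is the point: NumPy applies each operation across the whole array in optimized C code, which is where its large speed advantage over Python lists comes from.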

File Processing and Serialization

Chapter 9 presents text-file processing, then demonstrates how to serialize objects using the popular JSON (JavaScript Object Notation) format. JSON is used frequently in the data science chapters. Many data science libraries provide built-in file-processing capabilities for loading datasets into your Python programs. In addition to plain text files, we process files in the popular CSV (comma-separated values) format using the Python Standard Library’s csv module and capabilities of the pandas data science library.
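
Both formats are handled by the Standard Library. Here is a compact sketch of the round-trip idea, using an in-memory buffer in place of a real file; the account data is our own illustrative example:

```python
# Serialize a dict to JSON and back, then round-trip rows through CSV.
import csv
import io
import json

accounts = {'accounts': [{'account': 100, 'name': 'Jones', 'balance': 24.98}]}

text = json.dumps(accounts)             # Python objects -> JSON string
restored = json.loads(text)             # JSON string -> Python objects
print(restored['accounts'][0]['name'])  # Jones

buffer = io.StringIO()                  # in-memory stand-in for a CSV file
csv.writer(buffer).writerows([['account', 'name'], [100, 'Jones']])
buffer.seek(0)
rows = list(csv.reader(buffer))
print(rows)  # [['account', 'name'], ['100', 'Jones']]
```

Note that CSV is untyped: the `100` written as an integer comes back as the string `'100'`, one reason pandas (which infers column types) is often more convenient for real datasets.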

Object-Based Programming

We emphasize using the huge number of valuable classes that the Python open-source community has packaged into industry-standard class libraries. You’ll focus on knowing what libraries are out there, choosing the ones you’ll need for your apps, creating objects from existing classes (usually in one or two lines of code) and making them “jump, dance and sing.” This object-based programming enables you to build impressive applications quickly and concisely, which is a significant part of Python’s appeal. With this approach, you’ll be able to use machine learning, deep learning and other AI technologies to quickly solve a wide range of intriguing problems, including cognitive computing challenges like speech recognition and computer vision.
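
Object-based programming in miniature: one line creates a capable object from an existing Standard Library class, and the rest is just asking it to do its job (the sample sentence is our own):

```python
# Counter does the tallying work; we never write the counting loop ourselves.
from collections import Counter

word_counts = Counter('to be or not to be'.split())

print(word_counts.most_common(2))  # [('to', 2), ('be', 2)]
print(word_counts['or'])           # 1
```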

Object-Oriented Programming

Developing custom classes is a crucial object-oriented programming skill, along with inheritance, polymorphism and duck typing. We discuss these in Chapter 10. Chapter 10 includes a discussion of unit testing with doctest and a fun card-shuffling-and-dealing simulation. Chapters 11–16 require only a few straightforward custom class definitions. In Python, you’ll probably use more of an object-based programming approach than full-out object-oriented programming.

Reproducibility

In the sciences in general, and data science in particular, there’s a need to reproduce the results of experiments and studies, and to communicate those results effectively. Jupyter Notebooks are a preferred means for doing this. We discuss reproducibility throughout the book in the context of programming techniques and software such as Jupyter Notebooks and Docker.

Performance

We use the %timeit profiling tool in several examples to compare the performance of different approaches to performing the same tasks. Other performance-related discussions include generator expressions, NumPy arrays vs. Python lists, performance of machine-learning and deep-learning models, and Hadoop and Spark distributed-computing performance.
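
%timeit is an IPython magic; outside IPython, the Standard Library’s timeit module gives the same kind of comparison. A minimal sketch, here timing a sum of squares via a list comprehension versus a generator expression (the iteration counts are arbitrary choices of ours):

```python
# Time two equivalent approaches; timeit runs each statement many times and
# reports the total elapsed seconds.
from timeit import timeit

list_time = timeit('sum([x * x for x in range(1000)])', number=2_000)
gen_time = timeit('sum(x * x for x in range(1000))', number=2_000)

print(f'list comprehension:   {list_time:.3f}s')
print(f'generator expression: {gen_time:.3f}s')
```

The generator expression also avoids materializing the intermediate list, so its memory advantage grows with the size of the range even when the timings are close.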

Big Data and Parallelism

In this book, rather than writing your own parallelization code, you’ll let libraries like Keras running over TensorFlow, and big data tools like Hadoop and Spark, parallelize operations for you. In this big data/AI era, the sheer processing requirements of massive data applications demand taking advantage of true parallelism provided by multicore processors, graphics processing units (GPUs), tensor processing units (TPUs) and huge clusters of computers in the cloud. Some big data tasks could have thousands of processors working in parallel to analyze massive amounts of data expeditiously.

CHAPTER DEPENDENCIES

If you’re a trainer planning your syllabus for a professional training course or a developer deciding which chapters to read, this section will help you make the best decisions. Please read the one-page color Table of Contents on the book’s inside front cover—this will quickly familiarize you with the book’s unique architecture. Teaching or reading the chapters in order is easiest. However, much of the content in the Intro to Data Science sections at the ends of Chapters 1–10 and the case studies in Chapters 11–16 requires only Chapters 1–5 and small portions of Chapters 6–10, as discussed below.

Part 1: Python Fundamentals Quickstart

We recommend that you read all the chapters in order:

Chapter 1, Introduction to Computers and Python, introduces concepts that lay the groundwork for the Python programming in Chapters 2–10 and the big data, artificial-intelligence and cloud-based case studies in Chapters 11–16. The chapter also includes test-drives of the IPython interpreter and Jupyter Notebooks.

Chapter 2, Introduction to Python Programming, presents Python programming fundamentals with code examples illustrating key language features.

Chapter 3, Control Statements, presents Python’s control statements and introduces basic list processing.

Chapter 4, Functions, introduces custom functions, presents simulation techniques with random-number generation and introduces tuple fundamentals.

Chapter 5, Sequences: Lists and Tuples, presents Python’s built-in list and tuple collections in more detail and begins introducing functional-style programming.

Part 2: Python Data Structures, Strings and Files

The following summarizes inter-chapter dependencies for Python Chapters 6–9 and assumes that you’ve read Chapters 1–5.

Chapter 6, Dictionaries and Sets—The Intro to Data Science section in this chapter is not dependent on the chapter’s contents.

Chapter 7, Array-Oriented Programming with NumPy—The Intro to Data Science section requires dictionaries (Chapter 6) and arrays (Chapter 7).

Chapter 8, Strings: A Deeper Look—The Intro to Data Science section requires raw strings and regular expressions (Sections 8.11–8.12), and pandas Series and DataFrame features from Section 7.14’s Intro to Data Science.

Chapter 9, Files and Exceptions—For JSON serialization, it’s useful to understand dictionary fundamentals (Section 6.2). Also, the Intro to Data Science section requires the built-in open function and the with statement (Section 9.3), and pandas DataFrame features from Section 7.14’s Intro to Data Science.

Part 3: Python High-End Topics

The following summarizes inter-chapter dependencies for Python Chapter 10 and assumes that you’ve read Chapters 1–5.

Chapter 10, Object-Oriented Programming—The Intro to Data Science section requires pandas DataFrame features from Section 7.14’s Intro to Data Science. Trainers wanting to cover only classes and objects can present Sections 10.1–10.6. Trainers wanting to cover more advanced topics like inheritance, polymorphism and duck typing can present Sections 10.7–10.9. Sections 10.10–10.15 provide additional advanced perspectives.

