EBookClubs

Read Books & Download eBooks Full Online

Book THE BEST WAY TO LEARN JAVA GUI WITH MYSQL AND SQL SERVER

Download or read book THE BEST WAY TO LEARN JAVA GUI WITH MYSQL AND SQL SERVER written by Vivian Siahaan and published by SPARTA PUBLISHING. This book was released on 2020-01-13 with total page 546 pages. Available in PDF, EPUB and Kindle. Book excerpt: This hands-on tutorial/reference/guide to MySQL and SQL Server is not only perfect for students and beginners, but it also works for experienced developers who aren't getting the most from MySQL and SQL Server. As you would expect, this book shows how to build from scratch two different databases, MySQL and SQL Server, using Java. For designing the GUI, and as an IDE, you will make use of the NetBeans tool. In the first chapter, you will learn: How to install NetBeans, JDK 11, and MySQL Connector/J; How to integrate external libraries into projects; How the basic MySQL commands are used; and How to write query statements to create databases, create tables, fill tables, and manipulate table contents. In the second chapter, you will study: Creating the initial three tables in the school database: the Teacher table, the TClass table, and the Subject table; Creating database configuration files; Creating a Java GUI for viewing and navigating the contents of each table; Creating a Java GUI for inserting and editing tables; and Creating a Java GUI to join and query the three tables. In the third chapter, you will learn: Creating the main form to connect all forms; Creating a project that will add three more tables to the school database: the Student table, the Parent table, and the Tuition table; Creating a Java GUI to view and navigate the contents of each table; Creating a Java GUI for editing, inserting, and deleting records in each table; and Creating a Java GUI to join and query the three tables and then all six. In chapter four, you will study how to query the six tables. In chapter five, you will be taught how to create the Crime database and its tables. In chapter six, you will be taught how to extract image features in a Java GUI, utilizing the BufferedImage class. In chapter seven, you will be taught to create a Java GUI to view, edit, insert, and delete Suspect table data. This table has eleven columns: suspect_id (primary key), suspect_name, birth_date, case_date, report_date, suspect_status, arrest_date, mother_name, address, telephone, and photo. In chapter eight, you will be taught to create a Java GUI to view, edit, insert, and delete Feature_Extraction table data. This table has eight columns: feature_id (primary key), suspect_id (foreign key), feature1, feature2, feature3, feature4, feature5, and feature6. In chapter nine, you will add two tables: Police_Station and Investigator. These two tables will later be joined to the Suspect table through another table, File_Case, which will be built in the tenth chapter. The Police_Station table has six columns: police_station_id (primary key), location, city, province, telephone, and photo. The Investigator table has eight columns: investigator_id (primary key), investigator_name, rank, birth_date, gender, address, telephone, and photo. Here, you will design a Java GUI to display, edit, fill, and delete data in both tables. In chapter ten, you will add two tables: Victim and File_Case. The File_Case table will connect four other tables: Suspect, Police_Station, Investigator, and Victim. The Victim table has nine columns: victim_id (primary key), victim_name, crime_type, birth_date, crime_date, gender, address, telephone, and photo.
The File_Case table has seven columns: file_case_id (primary key), suspect_id (foreign key), police_station_id (foreign key), investigator_id (foreign key), victim_id (foreign key), status, and description. Here, you will also design a Java GUI to display, edit, fill, and delete data in both tables. Finally, this book will hopefully prove useful and improve the database programming skills of every Java/MySQL/SQL Server programmer.
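As a small taste of what these chapters build on, the sketch below opens a JDBC connection to a local MySQL server and lists rows from the Teacher table. It is a minimal illustration under stated assumptions, not the book's code: the database name, credentials, and the teacher_id/teacher_name columns are placeholders, and MySQL Connector/J must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class MySQLConnectDemo {
    public static void main(String[] args) {
        // Hypothetical connection settings for a local MySQL server.
        String url = "jdbc:mysql://localhost:3306/school";
        try (Connection conn = DriverManager.getConnection(url, "root", "password");
             Statement stmt = conn.createStatement();
             // Column names are illustrative; the book defines its own schema.
             ResultSet rs = stmt.executeQuery("SELECT teacher_id, teacher_name FROM Teacher")) {
            while (rs.next()) {
                System.out.println(rs.getInt("teacher_id") + ": " + rs.getString("teacher_name"));
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}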

Book The Best Way to Learn Java GUI with MySQL, MariaDB, and PostgreSQL

Download or read book The Best Way to Learn Java GUI with MySQL, MariaDB, and PostgreSQL written by Vivian Siahaan and published by SPARTA PUBLISHING. This book was released on 2020-01-10 with total page 535 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this book, you will create three Java GUI applications using MySQL, MariaDB, and PostgreSQL. You will learn how to build a database management system from scratch using Java. For designing the GUI, and as an IDE, you will make use of the NetBeans tool. Gradually and step by step, you will be taught how to utilize three different databases in Java. In chapter one, you will create the School database and its six tables. In chapter two, you will study: Creating the initial three tables in the School database: the Teacher table, the TClass table, and the Subject table; Creating database configuration files; Creating a Java GUI for viewing and navigating the contents of each table; Creating a Java GUI for inserting and editing tables; and Creating a Java GUI to join and query the three tables. In chapter three, you will learn: Creating the main form to connect all forms; Creating a project that will add three more tables to the School database: the Student table, the Parent table, and the Tuition table; Creating a Java GUI to view and navigate the contents of each table; Creating a Java GUI for editing, inserting, and deleting records in each table; and Creating a Java GUI to join and query the three tables and then all six. In chapter four, you will study how to query the six tables. In chapter five, you will learn the basics of cryptography using Java. Here, you will learn how to write a Java program to compute hashes and MACs (Message Authentication Codes), store keys in a KeyStore, generate a PrivateKey and PublicKey, encrypt/decrypt data, and generate and verify digital signatures. In chapter six, you will create the Bank database and its tables. In chapter seven, you will learn how to create and store salted passwords and verify them. You will create a Login table. In this case, you will see how to create a Java GUI using NetBeans to implement it. In addition to the Login table, in this chapter you will also create a Client table. In the case of the Client table, you will learn how to generate and save public and private keys into a database. You will also learn how to encrypt/decrypt data and save the results into a database. In chapter eight, you will create an Account table. This table has the following ten fields: account_id (primary key), client_id (primary key), account_number, account_date, account_type, plain_balance, cipher_balance, decipher_balance, digital_signature, and signature_verification. In this case, you will learn how to implement generating and verifying digital signatures and storing the results into a database. In chapter nine, you will create a Client_Data table, which has the following seven fields: client_data_id (primary key), account_id (primary key), birth_date, address, mother_name, telephone, and photo_path. In chapter ten, you will be taught how to create the Crime database and its tables. In chapter eleven, you will be taught how to extract image features in a Java GUI, utilizing the BufferedImage class. In chapter twelve, you will be taught to create a Java GUI to view, edit, insert, and delete Suspect table data. This table has eleven columns: suspect_id (primary key), suspect_name, birth_date, case_date, report_date, suspect_status, arrest_date, mother_name, address, telephone, and photo.
In chapter thirteen, you will be taught to create a Java GUI to view, edit, insert, and delete Feature_Extraction table data. This table has eight columns: feature_id (primary key), suspect_id (foreign key), feature1, feature2, feature3, feature4, feature5, and feature6. In chapter fourteen, you will add two tables: Police_Station and Investigator. These two tables will later be joined to the Suspect table through another table, File_Case. The Police_Station table has six columns: police_station_id (primary key), location, city, province, telephone, and photo. The Investigator table has eight columns: investigator_id (primary key), investigator_name, rank, birth_date, gender, address, telephone, and photo. Here, you will design a Java GUI to display, edit, fill, and delete data in both tables. In chapter fifteen, you will add two tables: Victim and File_Case. The File_Case table will connect four other tables: Suspect, Police_Station, Investigator, and Victim. The Victim table has nine columns: victim_id (primary key), victim_name, crime_type, birth_date, crime_date, gender, address, telephone, and photo. The File_Case table has seven columns: file_case_id (primary key), suspect_id (foreign key), police_station_id (foreign key), investigator_id (foreign key), victim_id (foreign key), status, and description. Here, you will also design a Java GUI to display, edit, fill, and delete data in both tables.
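The salted-password scheme used for the Login table can be sketched with standard java.security classes. This is a simplified illustration, not the book's implementation: it hashes with SHA-256, and a production system would prefer a deliberately slow KDF such as PBKDF2.

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

public class SaltedPasswordDemo {
    // Generate a fresh random salt for a new account.
    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Hash the salt followed by the password using SHA-256.
    static byte[] hash(String password, byte[] salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        return md.digest(password.getBytes("UTF-8"));
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = newSalt();
        byte[] stored = hash("secret123", salt); // store salt and hash in the Login table
        // Verification: re-hash the login attempt with the same salt and compare.
        boolean ok = Arrays.equals(stored, hash("secret123", salt));
        System.out.println("Password verified: " + ok);
        System.out.println("Salt (Base64): " + Base64.getEncoder().encodeToString(salt));
    }
}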

Book The Best Guide to Database Programming with Java GUI, PostgreSQL, and SQL Server

Download or read book The Best Guide to Database Programming with Java GUI, PostgreSQL, and SQL Server written by Vivian Siahaan and published by SPARTA PUBLISHING. This book was released on 2020-01-13 with total page 450 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book offers the straightforward, practical answers you need to help you do your job. This hands-on tutorial/reference/guide to PostgreSQL and SQL Server is not only perfect for students and beginners, but it also works for experienced developers who aren't getting the most from PostgreSQL and SQL Server. As you would expect, this book shows how to build from scratch two different databases, PostgreSQL and SQL Server, using Java. For designing the GUI, and as an IDE, you will make use of the NetBeans tool. In chapter one, you will learn: How to install NetBeans, JDK 11, and the PostgreSQL connector; How to integrate external libraries into projects; How the basic PostgreSQL commands are used; and How to write query statements to create databases, create tables, fill tables, and manipulate table contents. In chapter two, you will learn how to query data from PostgreSQL using JDBC, including establishing a database connection, creating a statement object, executing the query, processing the ResultSet object, querying data using a statement that returns multiple rows, querying data using a statement that has parameters, inserting data into a table using JDBC, updating data in a PostgreSQL database using JDBC, calling a PostgreSQL stored function using JDBC, deleting data from a PostgreSQL table using JDBC, and PostgreSQL JDBC transactions. In chapter three, you will learn the basics of cryptography using Java. Here, you will learn how to write a Java program to compute hashes and MACs (Message Authentication Codes), store keys in a KeyStore, generate a PrivateKey and PublicKey, encrypt/decrypt data, and generate and verify digital signatures. You will also learn how to create and store salted passwords and verify them. In chapter four, you will create a PostgreSQL database, named Bank, and its tables. In chapter five, you will create a Login table. In this case, you will see how to create a Java GUI using NetBeans to implement it. In addition to the Login table, in this chapter you will also create a Client table. In the case of the Client table, you will learn how to generate and save public and private keys into a database. You will also learn how to encrypt/decrypt data and save the results into a database. In chapter six, you will create an Account table. This table has the following ten fields: account_id (primary key), client_id (primary key), account_number, account_date, account_type, plain_balance, cipher_balance, decipher_balance, digital_signature, and signature_verification. In this case, you will learn how to implement generating and verifying digital signatures and storing the results into a database. In chapter seven, you will create a table named Client_Data, which has seven columns: client_data_id (primary key), account_id (primary key), birth_date, address, mother_name, telephone, and photo_path. In chapter eight, you will be taught how to create a SQL Server database, named Crime, and its tables. In chapter nine, you will be taught how to extract image features in a Java GUI, utilizing the BufferedImage class. In chapter ten, you will be taught to create a Java GUI to view, edit, insert, and delete Suspect table data.
This table has eleven columns: suspect_id (primary key), suspect_name, birth_date, case_date, report_date, suspect_status, arrest_date, mother_name, address, telephone, and photo. In chapter eleven, you will be taught to create a Java GUI to view, edit, insert, and delete Feature_Extraction table data. This table has eight columns: feature_id (primary key), suspect_id (foreign key), feature1, feature2, feature3, feature4, feature5, and feature6. In chapter twelve, you will add two tables: Police_Station and Investigator. These two tables will later be joined to the Suspect table through another table, File_Case, which will be built in the thirteenth chapter. The Police_Station table has six columns: police_station_id (primary key), location, city, province, telephone, and photo. The Investigator table has eight columns: investigator_id (primary key), investigator_name, rank, birth_date, gender, address, telephone, and photo. Here, you will design a Java GUI to display, edit, fill, and delete data in both tables. In chapter thirteen, you will add two tables: Victim and File_Case. The File_Case table will connect four other tables: Suspect, Police_Station, Investigator, and Victim. The Victim table has nine columns: victim_id (primary key), victim_name, crime_type, birth_date, crime_date, gender, address, telephone, and photo. The File_Case table has seven columns: file_case_id (primary key), suspect_id (foreign key), police_station_id (foreign key), investigator_id (foreign key), victim_id (foreign key), status, and description. Here, you will also design a Java GUI to display, edit, fill, and delete data in both tables. Finally, this book will hopefully prove useful and improve the database programming skills of every Java/PostgreSQL/SQL Server programmer.
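The "statement that has parameters" topic from chapter two boils down to PreparedStatement. Below is a hedged sketch of a parameterized query against the Bank database's Client_Data table; the URL, credentials, and sample key value are assumptions, and the PostgreSQL JDBC driver must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PgParameterizedQueryDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection settings for a local PostgreSQL server.
        String url = "jdbc:postgresql://localhost:5432/bank";
        try (Connection conn = DriverManager.getConnection(url, "postgres", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT address, mother_name FROM Client_Data WHERE client_data_id = ?")) {
            ps.setInt(1, 1); // bind the parameter instead of concatenating strings
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("address") + " / " + rs.getString("mother_name"));
                }
            }
        }
    }
}

Binding values with setInt/setString rather than building the SQL by string concatenation also protects the GUI's input forms against SQL injection.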

Book The Best Tutorial to Learn Database Programming with Java GUI, MariaDB, and SQL Server

Download or read book The Best Tutorial to Learn Database Programming with Java GUI, MariaDB, and SQL Server written by Vivian Siahaan and published by SPARTA PUBLISHING. This book was released on 2020-01-08 with total page 404 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book explains relational theory in practice and demonstrates through two projects how you can apply it to your use of MariaDB and SQL Server databases. It covers the important requirements of teaching databases with a practical and progressive perspective, and offers the straightforward, practical answers you need to help you do your job. This hands-on tutorial/reference/guide to MariaDB and SQL Server is not only perfect for students and beginners, but it also works for experienced developers who aren't getting the most from MariaDB and SQL Server. As you would expect, this book shows how to build from scratch two different databases, MariaDB and SQL Server, using Java. For designing the GUI, and as an IDE, you will make use of the NetBeans tool. In chapter one, you will learn the basics of cryptography using Java. Here, you will learn how to write a Java program to compute hashes and MACs (Message Authentication Codes), store keys in a KeyStore, generate a PrivateKey and PublicKey, encrypt/decrypt data, and generate and verify digital signatures. You will also learn how to create and store salted passwords and verify them. In chapter two, you will create a MariaDB database, named Bank, and its tables. In chapter three, you will create a Login table. In this case, you will see how to create a Java GUI using NetBeans to implement it. In addition to the Login table, in this chapter you will also create a Client table. In the case of the Client table, you will learn how to generate and save public and private keys into a database. You will also learn how to encrypt/decrypt data and save the results into a database. In chapter four, you will create an Account table. This table has the following ten fields: account_id (primary key), client_id (primary key), account_number, account_date, account_type, plain_balance, cipher_balance, decipher_balance, digital_signature, and signature_verification. In this case, you will learn how to implement generating and verifying digital signatures and storing the results into a database. In chapter five, you will create a table named Client_Data, which has seven columns: client_data_id (primary key), account_id (primary key), birth_date, address, mother_name, telephone, and photo_path. In chapter six, you will be taught how to create a SQL Server database, named Crime, and its tables. In chapter seven, you will be taught how to extract image features in a Java GUI, utilizing the BufferedImage class. In chapter eight, you will be taught to create a Java GUI to view, edit, insert, and delete Suspect table data. This table has eleven columns: suspect_id (primary key), suspect_name, birth_date, case_date, report_date, suspect_status, arrest_date, mother_name, address, telephone, and photo. In chapter nine, you will be taught to create a Java GUI to view, edit, insert, and delete Feature_Extraction table data. This table has eight columns: feature_id (primary key), suspect_id (foreign key), feature1, feature2, feature3, feature4, feature5, and feature6. In chapter ten, you will add two tables: Police_Station and Investigator. These two tables will later be joined to the Suspect table through another table, File_Case, which will be built in the eleventh chapter.
The Police_Station table has six columns: police_station_id (primary key), location, city, province, telephone, and photo. The Investigator table has eight columns: investigator_id (primary key), investigator_name, rank, birth_date, gender, address, telephone, and photo. Here, you will design a Java GUI to display, edit, fill, and delete data in both tables. In chapter eleven, you will add two tables: Victim and File_Case. The File_Case table will connect four other tables: Suspect, Police_Station, Investigator, and Victim. The Victim table has nine columns: victim_id (primary key), victim_name, crime_type, birth_date, crime_date, gender, address, telephone, and photo. The File_Case table has seven columns: file_case_id (primary key), suspect_id (foreign key), police_station_id (foreign key), investigator_id (foreign key), victim_id (foreign key), status, and description. Here, you will also design a Java GUI to display, edit, fill, and delete data in both tables. Finally, this book will hopefully prove useful and improve the database programming skills of every Java/MariaDB/SQL Server programmer.
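A table like Suspect can also be created programmatically through JDBC. The sketch below targets SQL Server with Microsoft's JDBC driver; the connection settings and all column types are illustrative guesses, since the excerpt lists only the column names.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateSuspectTableDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical SQL Server settings; encrypt=false depends on driver version.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=Crime;encrypt=false";
        try (Connection conn = DriverManager.getConnection(url, "sa", "password");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "CREATE TABLE Suspect (" +
                "  suspect_id INT PRIMARY KEY," +
                "  suspect_name VARCHAR(100)," +
                "  birth_date DATE," +
                "  case_date DATE," +
                "  report_date DATE," +
                "  suspect_status VARCHAR(50)," +
                "  arrest_date DATE," +
                "  mother_name VARCHAR(100)," +
                "  address VARCHAR(200)," +
                "  telephone VARCHAR(25)," +
                "  photo VARBINARY(MAX))"); // image bytes stored in the table
            System.out.println("Suspect table created.");
        }
    }
}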

Book Step By Step Java GUI With JDBC & MySQL: Practical approach to build database desktop application with project based examples

Download or read book Step By Step Java GUI With JDBC & MySQL: Practical approach to build database desktop application with project based examples written by Hamzan Wadi and published by TR Publisher. This book was released with total page 340 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book comes as an answer for students, lecturers, or the general public who want to learn Java GUI programming starting from scratch. It is suitable for beginner learners who want to learn Java GUI programming from the basics up to the database level, and it also serves Java learners who want to level up by building GUI-based database applications for small, medium, or corporate businesses. The discussion in this book is neither wordy nor purely theoretical. Each topic is presented in a concise and clear manner, moving directly to an example that implements it. Beginner learners who want to learn through this book need not be afraid of losing their grasp of programming concepts, because this book discusses the concepts of Java programming in detail, from the basic to the advanced level. By applying the concept of learning by doing, this book will guide you step by step to start Java GUI programming from the basics until you are able to create database applications using JDBC and MySQL. Here is the material that you will learn in this book. CHAPTER 1: This chapter gives a brief and clear introduction to creating desktop applications using Java GUI, starting from how to set up your environment, create your first project, understand the various controls on your form, and interact with your form using event handling. CHAPTER 2: This chapter clearly discusses the concept and the implementation of data types and variables in Java GUI. CHAPTER 3: This chapter discusses in detail how to make decisions or deal with conditions in a program. This chapter is the first step toward a deeper understanding of logic in programming. It specifically discusses relational operators and logical operators, if statements, if-else statements, and switch-case statements, and how to implement all of these conditional statements in a Java GUI. CHAPTER 4: This chapter discusses in detail the looping statements in Java, including the for statement, while statement, do-while statement, break statement, and continue statement. All of these looping statements are implemented in a Java GUI. CHAPTER 5: This chapter discusses how to use methods to group code by functionality. This discussion is also the first step for programmers learning to write efficient program code. The chapter covers in detail the basics of methods, methods with return values, how to pass parameters to methods, how to overload your methods, and how to make recursive methods. CHAPTER 6: This chapter discusses in detail how to create and use arrays, read and write files, and display data stored in arrays or files in graphical form. CHAPTER 7: This chapter discusses in detail the basics of MySQL, how to access databases using JDBC and MySQL, and how to perform CRUD operations using JDBC and MySQL. CHAPTER 8: This chapter goes further into Java GUI programming. It discusses in detail how to make a program that consists of multiple forms, how to create an MDI application, and how to create reports using iReport with data stored in a database.
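The event-handling idea from CHAPTER 1 can be previewed with a tiny hand-written Swing form: a button click reads a text field and updates a label. This is a generic sketch, not code from the book, which builds its forms step by step in an IDE.

import java.awt.FlowLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JTextField;
import javax.swing.SwingUtilities;

public class HelloFormDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Hello Form");
            frame.setLayout(new FlowLayout());
            JTextField nameField = new JTextField(15);
            JLabel greeting = new JLabel(" ");
            JButton button = new JButton("Greet");
            // Event handling: run this code whenever the button is clicked.
            button.addActionListener(e ->
                    greeting.setText("Hello, " + nameField.getText() + "!"));
            frame.add(nameField);
            frame.add(button);
            frame.add(greeting);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setSize(320, 120);
            frame.setVisible(true);
        });
    }
}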

Book JAVA GUI WITH POSTGRESQL: A Practical Approach to Build Database Project for Students and Programmers

Download or read book JAVA GUI WITH POSTGRESQL: A Practical Approach to Build Database Project for Students and Programmers written by Vivian Siahaan and published by SPARTA PUBLISHING. This book was released on 2019-08-21 with total page 307 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this book, you will learn how to build from scratch a PostgreSQL database management system using Java. For designing the GUI, and as an IDE, you will make use of the NetBeans tool. Gradually and step by step, you will be taught how to utilize PostgreSQL in Java. In the first chapter, you will learn: How to install NetBeans, JDK 11, and the PostgreSQL connector; How to integrate external libraries into projects; How the basic PostgreSQL commands are used; and How to write query statements to create databases, create tables, fill tables, and manipulate table contents. In the second chapter, you will learn how to query data from PostgreSQL using JDBC, including establishing a database connection, creating a statement object, executing the query, processing the ResultSet object, querying data using a statement that returns multiple rows, querying data using a statement that has parameters, inserting data into a table using JDBC, updating data in a PostgreSQL database using JDBC, calling a PostgreSQL stored function using JDBC, deleting data from a PostgreSQL table using JDBC, and PostgreSQL JDBC transactions. In the third chapter, you will study: Creating the initial three tables in the school database: the Teacher table, the TClass table, and the Subject table; Creating database configuration files; Creating a Java GUI for viewing and navigating the contents of each table; Creating a Java GUI for inserting and editing tables; and Creating a Java GUI to join and query the three tables. In the fourth chapter, you will learn: Creating the main form to connect all forms; Creating a project that will add three more tables to the school database: the Student table, the Parent table, and the Tuition table; Creating a Java GUI to view and navigate the contents of each table; Creating a Java GUI for editing, inserting, and deleting records in each table; and Creating a Java GUI to join and query the three tables and then all six. In the last chapter, you will study how to query the six tables. Finally, this book will hopefully prove useful and improve the database programming skills of every Java/PostgreSQL programmer.
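Of the chapter-two topics, the JDBC transaction pattern deserves a quick sketch: disable auto-commit, execute the statements, then commit them together or roll everything back. The table and column names here (a Tuition table with paid and due amounts) are hypothetical, chosen only to make the all-or-nothing behavior concrete.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PgTransactionDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection settings and schema for illustration.
        String url = "jdbc:postgresql://localhost:5432/school";
        try (Connection conn = DriverManager.getConnection(url, "postgres", "password")) {
            conn.setAutoCommit(false); // begin the transaction
            try (PreparedStatement pay = conn.prepareStatement(
                         "UPDATE Tuition SET paid = paid + ? WHERE tuition_id = ?");
                 PreparedStatement due = conn.prepareStatement(
                         "UPDATE Tuition SET due = due - ? WHERE tuition_id = ?")) {
                pay.setDouble(1, 250.0);
                pay.setInt(2, 1);
                pay.executeUpdate();
                due.setDouble(1, 250.0);
                due.setInt(2, 1);
                due.executeUpdate();
                conn.commit(); // both updates become visible together
            } catch (SQLException e) {
                conn.rollback(); // or neither is applied
                throw e;
            }
        }
    }
}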

Book The Fast Way to Learn Java GUI with PostgreSQL and SQLite

Download or read book The Fast Way to Learn Java GUI with PostgreSQL and SQLite written by Vivian Siahaan and published by SPARTA PUBLISHING. This book was released on 2020-01-15 with total page 493 pages. Available in PDF, EPUB and Kindle. Book excerpt: This step-by-step guide to exploring database programming using Java is ideal for people with little or no programming experience. The goal of this concise book is not just to teach you Java, but to help you think like a programmer. Each brief chapter covers the material for one week of a college course to help you practice what you've learned. As you would expect, this book shows how to build from scratch two different databases, PostgreSQL and SQLite, using Java. For designing the GUI, and as an IDE, you will make use of the NetBeans tool. In the first chapter, you will learn: How to install NetBeans, JDK 11, and the PostgreSQL connector; How to integrate external libraries into projects; How the basic PostgreSQL commands are used; and How to write query statements to create databases, create tables, fill tables, and manipulate table contents. In the second chapter, you will learn how to query data from PostgreSQL using JDBC, including establishing a database connection, creating a statement object, executing the query, processing the ResultSet object, querying data using a statement that returns multiple rows, querying data using a statement that has parameters, inserting data into a table using JDBC, updating data in a PostgreSQL database using JDBC, calling a PostgreSQL stored function using JDBC, deleting data from a PostgreSQL table using JDBC, and PostgreSQL JDBC transactions. In chapter three, you will create a PostgreSQL database, named School, and its tables. In chapter four, you will study: Creating the initial three tables in the School database: the Teacher table, the TClass table, and the Subject table; Creating database configuration files; Creating a Java GUI for viewing and navigating the contents of each table; Creating a Java GUI for inserting and editing tables; and Creating a Java GUI to join and query the three tables. In chapter five, you will learn: Creating the main form to connect all forms; Creating a project that will add three more tables to the School database: the Student table, the Parent table, and the Tuition table; Creating a Java GUI to view and navigate the contents of each table; Creating a Java GUI for editing, inserting, and deleting records in each table; and Creating a Java GUI to join and query the three tables and then all six. In chapter six, you will study how to query the six tables. In chapter seven, you will be shown how to create a SQLite database and tables with Java. In chapter eight, you will be taught how to extract image features in a Java GUI, utilizing the BufferedImage class. The digital image techniques used in this chapter to extract image features are grayscaling, sharpening, inverting, blurring, dilation, erosion, closing, opening, vertical Prewitt, horizontal Prewitt, Laplacian, horizontal Sobel, and vertical Sobel. As a reader, you can extend this to store other advanced image features based on descriptors such as SIFT, for developing descriptor-based matching.
In chapter nine, you will be taught to create a Java GUI to view, edit, insert, and delete Suspect table data. This table has eleven columns: suspect_id (primary key), suspect_name, birth_date, case_date, report_date, suspect_status, arrest_date, mother_name, address, telephone, and photo. In chapter ten, you will be taught to create a Java GUI to view, edit, insert, and delete Feature_Extraction table data. This table has eight columns: feature_id (primary key), suspect_id (foreign key), feature1, feature2, feature3, feature4, feature5, and feature6. All six fields (except the keys) will have a BLOB data type, so that the image of each feature will be saved directly into this table. In chapter eleven, you will add two tables: Police_Station and Investigator. These two tables will later be joined to the Suspect table through another table, Case_File, which will be built in the twelfth chapter. The Police_Station table has six columns: police_station_id (primary key), location, city, province, telephone, and photo. The Investigator table has eight columns: investigator_id (primary key), investigator_name, rank, birth_date, gender, address, telephone, and photo. Here, you will design a Java GUI to display, edit, fill, and delete data in both tables. In chapter twelve, you will add two tables: Victim and Case_File. The Case_File table will connect four other tables: Suspect, Police_Station, Investigator, and Victim. The Victim table has nine columns: victim_id (primary key), victim_name, crime_type, birth_date, crime_date, gender, address, telephone, and photo. The Case_File table has seven columns: case_file_id (primary key), suspect_id (foreign key), police_station_id (foreign key), investigator_id (foreign key), victim_id (foreign key), status, and description. Here, you will also design a Java GUI to display, edit, fill, and delete data in both tables. Finally, this book will hopefully prove useful and improve the database programming skills of every Java/PostgreSQL/SQLite programmer.
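Of the image techniques listed above, grayscaling is the easiest to show with BufferedImage: read each pixel, combine its channels with the usual luminosity weights, and write the result back. The file names below are placeholders.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class GrayscaleDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage src = ImageIO.read(new File("input.jpg")); // placeholder path
        BufferedImage gray = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                // Luminosity weights approximate human brightness perception.
                int lum = (int) (0.299 * r + 0.587 * g + 0.114 * b);
                gray.setRGB(x, y, (lum << 16) | (lum << 8) | lum);
            }
        }
        ImageIO.write(gray, "png", new File("output_gray.png"));
    }
}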

Book The Quick Way to Learn Java GUI with MariaDB and SQLite

Download or read book The Quick Way to Learn Java GUI with MariaDB and SQLite written by Vivian Siahaan and published by SPARTA PUBLISHING. This book was released on 2020-01-15 with total page 441 pages. Available in PDF, EPUB and Kindle. Book excerpt: This step-by-step guide to exploring database programming using Java is ideal for people with little or no programming experience. The goal of this concise book is not just to teach you Java, but to help you think like a programmer. Each brief chapter covers the material for one week of a college course to help you practice what you've learned. As you would expect, this book shows how to build from scratch two different databases, MariaDB and SQLite, using Java. For designing the GUI, and as an IDE, you will make use of the NetBeans tool. In the first chapter, you will learn the basics of cryptography using Java. Here, you will learn how to write a Java program to compute hashes and MACs (Message Authentication Codes), store keys in a KeyStore, generate a PrivateKey and PublicKey, encrypt/decrypt data, and generate and verify digital signatures. In the second chapter, you will learn how to create and store salted passwords and verify them. You will create a Login table. In this case, you will see how to create a Java GUI using NetBeans to implement it. In addition to the Login table, in this chapter you will also create a Client table. In the case of the Client table, you will learn how to generate and save public and private keys into a database. You will also learn how to encrypt/decrypt data and save the results into a database. In the third chapter, you will create an Account table. This table has the following ten fields: account_id (primary key), client_id (primary key), account_number, account_date, account_type, plain_balance, cipher_balance, decipher_balance, digital_signature, and signature_verification. In this case, you will learn how to implement generating and verifying digital signatures and storing the results into a database. In the fourth chapter, you will create a table named Account, which has ten columns: account_id (primary key), client_id (primary key), account_number, account_date, account_type, plain_balance, cipher_balance, decipher_balance, digital_signature, and signature_verification. In the fifth chapter, you will create a Client_Data table, which has the following seven fields: client_data_id (primary key), account_id (primary key), birth_date, address, mother_name, telephone, and photo_path. In chapter six, you will be shown how to create a SQLite database and tables with Java. In chapter seven, you will be taught how to extract image features in a Java GUI, utilizing the BufferedImage class. The digital image techniques used in this chapter to extract image features are grayscaling, sharpening, inverting, blurring, dilation, erosion, closing, opening, vertical Prewitt, horizontal Prewitt, Laplacian, horizontal Sobel, and vertical Sobel. As a reader, you can extend this to store other advanced image features based on descriptors such as SIFT, for developing descriptor-based matching. In chapter eight, you will be taught to create a Java GUI to view, edit, insert, and delete Suspect table data. This table has eleven columns: suspect_id (primary key), suspect_name, birth_date, case_date, report_date, suspect_status, arrest_date, mother_name, address, telephone, and photo. In chapter nine, you will be taught to create a Java GUI to view, edit, insert, and delete Feature_Extraction table data.
This table has eight columns: feature_id (primary key), suspect_id (foreign key), feature1, feature2, feature3, feature4, feature5, and feature6. All six fields (except the keys) will have a BLOB data type, so that the image of each feature will be saved directly into this table. In chapter ten, you will add two tables: Police_Station and Investigator. These two tables will later be joined to the Suspect table through another table, Case_File, which will be built in the eleventh chapter. The Police_Station table has six columns: police_station_id (primary key), location, city, province, telephone, and photo. The Investigator table has eight columns: investigator_id (primary key), investigator_name, rank, birth_date, gender, address, telephone, and photo. Here, you will design a Java GUI to display, edit, fill, and delete data in both tables. In chapter eleven, you will add two tables: Victim and Case_File. The Case_File table will connect four other tables: Suspect, Police_Station, Investigator, and Victim. The Victim table has nine columns: victim_id (primary key), victim_name, crime_type, birth_date, crime_date, gender, address, telephone, and photo. The Case_File table has seven columns: case_file_id (primary key), suspect_id (foreign key), police_station_id (foreign key), investigator_id (foreign key), victim_id (foreign key), status, and description. Here, you will also design a Java GUI to display, edit, fill, and delete data in both tables. Finally, this book will hopefully prove useful and improve the database programming skills of every Java/MariaDB/SQLite programmer.
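Saving a feature image into a BLOB column, as described above, comes down to PreparedStatement.setBytes. The sketch below targets SQLite through the xerial sqlite-jdbc driver; the database file, image path, and key values are placeholders, and only three of the Feature_Extraction columns are filled for brevity.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobInsertDemo {
    public static void main(String[] args) throws Exception {
        byte[] featureImage = Files.readAllBytes(Paths.get("feature1.png")); // placeholder
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:crime.db");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO Feature_Extraction (feature_id, suspect_id, feature1) " +
                     "VALUES (?, ?, ?)")) {
            ps.setInt(1, 1);
            ps.setInt(2, 1);
            ps.setBytes(3, featureImage); // the BLOB column receives the raw image bytes
            ps.executeUpdate();
        }
    }
}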

Book Python GUI with SQL Server for Absolute Beginners

Download or read book Python GUI with SQL Server for Absolute Beginners written by Vivian Siahaan and published by SPARTA PUBLISHING. This book was released on 2019-09-20 with total page 373 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book covers SQL Server-based Python programming. Microsoft SQL Server is a robust relational database management system used by many organizations of various sizes, including top Fortune 100 companies. SQL Server is a relational database management system (RDBMS) developed and marketed by Microsoft. As a database server, the primary function of SQL Server is to store and retrieve data used by other applications. Deliberately designed for various levels of programming skill, this book is suitable for students, engineers, and even researchers in various disciplines. There is no need for advanced programming experience; only school-level programming skills are needed. In the first chapter, you will learn to use several widgets in PyQt5: Displaying a welcome message; Using the Radio Button widget; Grouping radio buttons; Displaying options in the form of a check box; and Displaying two groups of check boxes. In chapter two, you will learn the following topics: Using the Signal/Slot Editor; Copying and placing text from one Line Edit widget to another; Converting data types and making a simple calculator; Using the Spin Box widget; Using scrollbars and sliders; Using the List Widget; Selecting a number of list items from one List Widget and displaying them on another List Widget; Adding items to the List Widget; Performing operations on the List Widget; Using the Combo Box widget; Displaying data selected by the user from the Calendar Widget; Creating a hotel reservation application; and Displaying tabular data using Table Widgets. In the third chapter, you will learn: How to create the initial three tables in the School database: the Teacher, Class, and Subject tables; How to create database configuration files; How to create a Python GUI for inserting and editing tables; and How to create a Python GUI to join and query the three tables. In the fourth chapter, you will learn how to: Create a main form to connect all forms; Create a project that will add three more tables to the School database: the Student, Parent, and Tuition tables; Create a Python GUI for inserting and editing tables; and Create a Python GUI to join and query over the three tables. In the last chapter, you will join the six tables, Teacher, TClass, Subject, Student, Parent, and Tuition, and make queries over those tables.

Book Step by Step Tutorials On Deep Learning Using Scikit-Learn, Keras, and TensorFlow with Python GUI

Download or read book Step by Step Tutorials On Deep Learning Using Scikit-Learn, Keras, and TensorFlow with Python GUI written by Vivian Siahaan and published by BALIGE PUBLISHING. This book was released on 2023-06-18 with total page 324 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this book, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to implement deep learning on classifying fruits, classifying cats/dogs, detecting furniture, and classifying fashion. In Chapter 1, you will learn to create GUI applications to display a line graph using PyQt. You will also learn how to display an image and its histogram. In Chapter 2, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to classify fruits using the Fruits 360 dataset provided by Kaggle (https://www.kaggle.com/moltean/fruits/code) using Transfer Learning and CNN models. You will build a GUI application for this purpose. Here's the outline of the steps, focusing on transfer learning: 1. Dataset Preparation: Download the Fruits 360 dataset from Kaggle. Extract the dataset files and organize them into appropriate folders for training and testing. Install the necessary libraries like TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, and NumPy. 2. Data Preprocessing: Use OpenCV to read and load the fruit images from the dataset. Resize the images to a consistent size to feed them into the neural network. Convert the images to numerical arrays using NumPy. Normalize the image pixel values to a range between 0 and 1. Split the dataset into training and testing sets using Scikit-Learn. 3. Building the Model with Transfer Learning: Import the required modules from TensorFlow and Keras. Load a pre-trained model (e.g., VGG16, ResNet50, InceptionV3) without the top (fully connected) layers. Freeze the weights of the pre-trained layers to prevent them from being updated during training. Add your own fully connected layers on top of the pre-trained layers. Compile the model by specifying the loss function, optimizer, and evaluation metrics. 4. Model Training: Use the prepared training data to train the model. Specify the number of epochs and batch size for training. Monitor the training process for accuracy and loss using callbacks. 5. Model Evaluation: Evaluate the trained model on the test dataset using Scikit-Learn. Calculate accuracy, precision, recall, and F1-score for the classification results. 6. Predictions: Load and preprocess new fruit images for prediction using the same steps as in data preprocessing. Use the trained model to predict the class labels of the new images. In Chapter 3, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to classify cats/dogs using the dataset provided by Kaggle (https://www.kaggle.com/chetankv/dogs-cats-images) using a CNN with a data generator. You will build a GUI application for this purpose. The following steps are taken: Set up your development environment: Install the necessary libraries such as TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy, and any other dependencies required for the tutorial; Load and preprocess the dataset: Use libraries like OpenCV and NumPy to load and preprocess the dataset. Split the dataset into training and testing sets; Design and train the classification model: Use TensorFlow and Keras to design a convolutional neural network (CNN) model for image classification.
Define the architecture of the model, compile it with an appropriate loss function and optimizer, and train it using the training dataset; Evaluate the model: Evaluate the trained model using the testing dataset. Calculate metrics such as accuracy, precision, recall, and F1 score to assess the model's performance; Make predictions: Use the trained model to make predictions on new unseen images. Preprocess the images, feed them into the model, and obtain the predicted class labels; Visualize the results: Use libraries like Matplotlib or OpenCV to visualize the results, such as displaying sample images with their predicted labels, plotting the training/validation loss and accuracy curves, and creating a confusion matrix. In Chapter 4, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to perform detecting furnitures using Furniture Detector dataset provided by Kaggle (https://www.kaggle.com/akkithetechie/furniture-detector) using VGG16 model. You will build a GUI application for this purpose. Here are the steps you can follow to perform furniture detection: Dataset Preparation: Extract the dataset files and organize them into appropriate directories for training and testing; Data Preprocessing: Load the dataset using Pandas to analyze and preprocess the data. Explore the dataset to understand its structure, features, and labels. Perform any necessary preprocessing steps like resizing images, normalizing pixel values, and splitting the data into training and testing sets; Feature Extraction and Representation: Use OpenCV or any image processing libraries to extract meaningful features from the images. This might include techniques like edge detection, color-based features, or texture analysis. Convert the images and extracted features into a suitable representation for machine learning models. This can be achieved using NumPy arrays or other formats compatible with the chosen libraries; Model Training: Define a deep learning model using TensorFlow and Keras for furniture detection. You can choose pre-trained models like VGG16, ResNet, or custom architectures. Compile the model with an appropriate loss function, optimizer, and evaluation metrics. Train the model on the preprocessed dataset using the training set. Adjust hyperparameters like batch size, learning rate, and number of epochs to improve performance; Model Evaluation: Evaluate the trained model using the testing set. Calculate metrics such as accuracy, precision, recall, and F1 score to assess the model's performance. Analyze the results and identify areas for improvement; Model Deployment and Inference: Once satisfied with the model's performance, save it to disk for future use. Deploy the model to make predictions on new, unseen images. Use the trained model to perform furniture detection on images by applying it to the test set or new data. In Chapter 5, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to perform classifying fashion using Fashion MNIST dataset provided by Kaggle (https://www.kaggle.com/zalando-research/fashionmnist/code) using CNN model. You will build a GUI application for this purpose. 
Here are the general steps to implement image classification using the Fashion MNIST dataset: Import the necessary libraries: Import the required libraries such as TensorFlow, Keras, NumPy, Pandas, and Matplotlib for handling the dataset, building the model, and visualizing the results; Load and preprocess the dataset: Load the Fashion MNIST dataset, which consists of images of clothing items. Split the dataset into training and testing sets. Preprocess the images by scaling the pixel values to a range of 0 to 1 and converting the labels to categorical format; Define the model architecture: Create a convolutional neural network (CNN) model using Keras. The CNN consists of convolutional layers, pooling layers, and fully connected layers. Choose the appropriate architecture based on the complexity of the dataset; Compile the model: Specify the loss function, optimizer, and evaluation metric for the model. Common choices include categorical cross-entropy for multi-class classification and Adam optimizer; Train the model: Fit the model to the training data using the fit() function. Specify the number of epochs (iterations) and batch size. Monitor the training progress by tracking the loss and accuracy; Evaluate the model: Evaluate the trained model using the test dataset. Calculate the accuracy and other performance metrics to assess the model's performance; Make predictions: Use the trained model to make predictions on new unseen images. Load the test images, preprocess them, and pass them through the model to obtain class probabilities or predictions; Visualize the results: Visualize the training progress by plotting the loss and accuracy curves. Additionally, you can visualize the predictions and compare them with the true labels to gain insights into the model's performance.
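For reference, the evaluation metrics computed throughout these chapters (accuracy, precision, recall, and F1 score) have standard definitions in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):

\[ \text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{precision} = \frac{TP}{TP + FP}, \]
\[ \text{recall} = \frac{TP}{TP + FN}, \qquad F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}. \]

These are the same quantities that scikit-learn reports through its classification metrics.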

Book A PROGRESSIVE TUTORIAL TO DATABASE PROGRAMMING WITH PYTHON GUI AND POSTGRESQL

Download or read book A PROGRESSIVE TUTORIAL TO DATABASE PROGRAMMING WITH PYTHON GUI AND POSTGRESQL written by Vivian Siahaan and published by SPARTA PUBLISHING. This book was released on 2020-01-03 with total page 537 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this book, you will create two desktop applications using Python GUI and PostgreSQL. This book is a Python/PostgreSQL version of the Python/MySQL book written by the same author. What underlies the writing of this book is the growing popularity of the PostgreSQL database server lately, with more and more programmers migrating from MySQL to PostgreSQL. In this book, you will learn to build a school database project, step by step. A number of widgets from PyQt will be used for the user interface. In the first and second chapters, you will get an introduction to PostgreSQL. Then, you will learn how to query data from PostgreSQL using Python, including establishing a database connection, creating a statement object, executing the query, processing the resultset object, querying data using a statement that returns multiple rows, querying data using a statement that has parameters, inserting data into a table using Python, updating data in a PostgreSQL database using Python, calling a PostgreSQL stored function using Python, deleting data from a PostgreSQL table using Python, and PostgreSQL Python transactions. In the fourth chapter, you will study: Creating the initial three tables in the School database project: the Teacher table, the Class table, and the Subject table; Creating database configuration files; Creating a Python GUI for viewing and navigating the contents of each table; Creating a Python GUI for inserting and editing tables; and Creating a Python GUI to merge and query the three tables. In chapter five, you will learn: Creating the main form to connect all forms; Creating a project that will add three more tables to the school database: the Student table, the Parent table, and the Tuition table; Creating a Python GUI to view and navigate the contents of each table; Creating a Python GUI for editing, inserting, and deleting records in each table; and Creating a Python GUI to merge and query the three tables and then all six tables. In chapter six, you will create and configure a PostgreSQL database. In this chapter, you will create a Suspect table in the Crime database. This table has eleven columns: suspect_id (primary key), suspect_name, birth_date, case_date, report_date, suspect_status, arrest_date, mother_name, address, telephone, and photo. You will also create a GUI to display, edit, insert, and delete data for this table. In chapter seven, you will create a table with the name Feature_Extraction, which has eight columns: feature_id (primary key), suspect_id (foreign key), feature1, feature2, feature3, feature4, feature5, and feature6. The six fields (except the keys) will have a VARCHAR(200) data type. You will also create a GUI to display, edit, insert, and delete data for this table. In chapter eight, you will create two tables, Police and Investigator. The Police table has six columns: police_id (primary key), province, city, address, telephone, and photo. The Investigator table has eight columns: investigator_id (primary key), investigator_name, rank, birth_date, gender, address, telephone, and photo. You will also create a GUI to display, edit, insert, and delete data for both tables. In chapter nine, you will create two tables, Victim and Case_File.
The Victim table has nine columns: victim_id (primary key), victim_name, crime_type, birth_date, crime_date, gender, address, telephone, and photo. The Case_File table has seven columns: case_file_id (primary key), suspect_id (foreign key), police_id (foreign key), investigator_id (foreign key), victim_id (foreign key), status, and description. You will create a GUI to display, edit, insert, and delete data for both tables as well.

Book PYTHON GUI PROJECTS WITH MACHINE LEARNING AND DEEP LEARNING

Download or read book PYTHON GUI PROJECTS WITH MACHINE LEARNING AND DEEP LEARNING written by Vivian Siahaan and published by BALIGE PUBLISHING. This book was released on 2022-01-16 with total page 917 pages. Available in PDF, EPUB and Kindle. Book excerpt: PROJECT 1: THE APPLIED DATA SCIENCE WORKSHOP: Prostate Cancer Classification and Recognition Using Machine Learning and Deep Learning with Python GUI. Prostate cancer is cancer that occurs in the prostate. The prostate is a small walnut-shaped gland in males that produces the seminal fluid that nourishes and transports sperm. Prostate cancer is one of the most common types of cancer. Many prostate cancers grow slowly and are confined to the prostate gland, where they may not cause serious harm. However, while some types of prostate cancer grow slowly and may need minimal or even no treatment, other types are aggressive and can spread quickly. The dataset used in this project consists of 100 patients and can be used to implement machine learning and deep learning algorithms. The dataset consists of 100 observations and 10 variables (eight numeric variables, one categorical variable, and one ID), which are as follows: Id, Radius, Texture, Perimeter, Area, Smoothness, Compactness, Diagnosis Result, Symmetry, and Fractal Dimension. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC, distribution of features, feature importance, cross-validation score, predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. PROJECT 2: THE APPLIED DATA SCIENCE WORKSHOP: Urinary Biomarkers Based Pancreatic Cancer Classification and Prediction Using Machine Learning with Python GUI. Pancreatic cancer is an extremely deadly type of cancer. Once diagnosed, the five-year survival rate is less than 10%. However, if pancreatic cancer is caught early, the odds of surviving are much better. Unfortunately, many cases of pancreatic cancer show no symptoms until the cancer has spread throughout the body. A diagnostic test to identify people with pancreatic cancer could be enormously helpful. In a paper by Silvana Debernardi and colleagues, published in the journal PLOS Medicine, a multi-national team of researchers sought to develop an accurate diagnostic test for the most common type of pancreatic cancer, called pancreatic ductal adenocarcinoma or PDAC. They gathered a series of biomarkers from the urine of three groups of patients: healthy controls, patients with non-cancerous pancreatic conditions like chronic pancreatitis, and patients with pancreatic ductal adenocarcinoma. When possible, these patients were age- and sex-matched. The goal was to develop an accurate way to identify patients with pancreatic cancer. The key features are four urinary biomarkers: creatinine, LYVE1, REG1B, and TFF1. Creatinine is a protein that is often used as an indicator of kidney function. LYVE1 is lymphatic vessel endothelial hyaluronan receptor 1, a protein that may play a role in tumor metastasis. REG1B is a protein that may be associated with pancreas regeneration. TFF1 is trefoil factor 1, which may be related to regeneration and repair of the urinary tract.
The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, and MLP classifier. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC, distribution of features, feature importance, cross-validation score, predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. PROJECT 3: DATA SCIENCE CRASH COURSE: Voice Based Gender Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. This dataset was created to identify a voice as male or female, based upon acoustic properties of the voice and speech. The dataset consists of 3,168 recorded voice samples, collected from male and female speakers. The voice samples are pre-processed by acoustic analysis in R using the seewave and tuneR packages, with an analyzed frequency range of 0 Hz-280 Hz (human vocal range). The following acoustic properties of each voice are measured and included within the CSV: meanfreq: mean frequency (in kHz); sd: standard deviation of frequency; median: median frequency (in kHz); Q25: first quantile (in kHz); Q75: third quantile (in kHz); IQR: interquantile range (in kHz); skew: skewness; kurt: kurtosis; sp.ent: spectral entropy; sfm: spectral flatness; mode: mode frequency; centroid: frequency centroid (see specprop); peakf: peak frequency (frequency with highest energy); meanfun: average of fundamental frequency measured across acoustic signal; minfun: minimum fundamental frequency measured across acoustic signal; maxfun: maximum fundamental frequency measured across acoustic signal; meandom: average of dominant frequency measured across acoustic signal; mindom: minimum of dominant frequency measured across acoustic signal; maxdom: maximum of dominant frequency measured across acoustic signal; dfrange: range of dominant frequency measured across acoustic signal; modindx: modulation index, calculated as the accumulated absolute difference between adjacent measurements of fundamental frequencies divided by the frequency range (see the formula after this excerpt); and label: male or female. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC, distribution of features, feature importance, cross-validation score, predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. PROJECT 4: DATA SCIENCE CRASH COURSE: Thyroid Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. Thyroid disease is a general term for a medical condition that keeps your thyroid from making the right amount of hormones. The thyroid typically makes hormones that keep the body functioning normally. When the thyroid makes too much thyroid hormone, the body uses energy too quickly. The two main types of thyroid disease are hypothyroidism and hyperthyroidism. Both conditions can be caused by other diseases that impact the way the thyroid gland works. The dataset used in this project comes from Garavan Institute documentation as given by Ross Quinlan: six databases from the Garavan Institute in Sydney, Australia.
Each database contains approximately 2,800 training instances and 972 test instances, with around 29 attributes (either Boolean or continuous-valued) and plenty of missing data. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and 1D CNN. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC curve, feature distributions, feature importance, cross-validation scores, predicted versus true values, confusion matrix, learning curve, model performance, model scalability, training loss, and training accuracy.
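All four projects share the same scikit-learn training-and-evaluation loop behind the PyQt5 GUI. The sketch below illustrates that recurring pattern on the prostate dataset; the file name prostate.csv is a hypothetical placeholder, and only three of the twelve listed models are shown for brevity.

# A minimal sketch of the recurring train/evaluate/plot loop
# ("prostate.csv" and the exact column names are assumptions).
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import RocCurveDisplay, ConfusionMatrixDisplay

df = pd.read_csv("prostate.csv")                   # hypothetical file name
X = df.drop(columns=["Id", "Diagnosis Result"])    # the eight numeric features
y = df["Diagnosis Result"]                         # categorical target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
scaler = StandardScaler().fit(X_train)             # scale to zero mean, unit variance
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "K-Nearest Neighbor": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    cv = cross_val_score(model, X_train, y_train, cv=5)   # 5-fold cross-validation
    model.fit(X_train, y_train)
    print(f"{name}: CV accuracy {cv.mean():.3f}, "
          f"test accuracy {model.score(X_test, y_test):.3f}")
    RocCurveDisplay.from_estimator(model, X_test, y_test)         # ROC curve
    ConfusionMatrixDisplay.from_estimator(model, X_test, y_test)  # confusion matrix
plt.show()

In the books themselves this loop is driven from a PyQt5 window, with each plot rendered on an embedded Matplotlib canvas.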

Book THREE BOOKS IN ONE: Deep Learning Using SCIKIT-LEARN, KERAS, and TENSORFLOW with Python GUI

Download or read book THREE BOOKS IN ONE Deep Learning Using SCIKIT LEARN KERAS and TENSORFLOW with Python GUI written by Vivian Siahaan and published by BALIGE PUBLISHING. This book was released on 2021-05-20 with total page 588 pages. Available in PDF, EPUB and Kindle. Book excerpt: BOOK 1: THE PRACTICAL GUIDES ON DEEP LEARNING USING SCIKIT-LEARN, KERAS, AND TENSORFLOW WITH PYTHON GUI In this book, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to implement deep learning for recognizing traffic signs using the GTSRB dataset, detecting brain tumors using the Brain MRI Images dataset, classifying gender, and recognizing facial expressions using the FER2013 dataset. In Chapter 1, you will learn to create GUI applications to display a line graph using PyQt. You will also learn how to display an image and its histogram. In Chapter 2, you will learn how to use TensorFlow, Keras, Scikit-Learn, Pandas, NumPy and other libraries to perform prediction on handwritten digits using the MNIST dataset with PyQt. You will build a GUI application for this purpose. In Chapter 3, you will learn how to recognize traffic signs using the GTSRB dataset from Kaggle. There are several different types of traffic signs, like speed limits, no entry, traffic signals, turn left or right, children crossing, no passing of heavy vehicles, etc. Traffic signs classification is the process of identifying which class a traffic sign belongs to. In this Python project, you will build a deep neural network model that can classify traffic signs in an image into different categories. With this model, you will be able to read and understand traffic signs, which is a very important task for all autonomous vehicles. You will build a GUI application for this purpose. In Chapter 4, you will learn how to detect brain tumors using the Brain MRI Images dataset provided by Kaggle (https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection) with a CNN model. You will build a GUI application for this purpose. In Chapter 5, you will learn how to classify gender using the dataset provided by Kaggle (https://www.kaggle.com/cashutosh/gender-classification-dataset) with MobileNetV2 and CNN models. You will build a GUI application for this purpose. In Chapter 6, you will learn how to recognize facial expressions using the FER2013 dataset provided by Kaggle (https://www.kaggle.com/nicolejyt/facialexpressionrecognition) with a CNN model. You will also build a GUI application for this purpose. BOOK 2: STEP BY STEP TUTORIALS ON DEEP LEARNING USING SCIKIT-LEARN, KERAS, AND TENSORFLOW WITH PYTHON GUI In this book, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to implement deep learning for classifying fruits, classifying cats/dogs, detecting furniture, and classifying fashion. In Chapter 1, you will learn to create GUI applications to display a line graph using PyQt. You will also learn how to display an image and its histogram. Then, you will learn how to use OpenCV, NumPy, and other libraries to perform feature extraction with Python GUI (PyQt). The feature detection techniques used in this chapter are Harris Corner Detection, Shi-Tomasi Corner Detector, and Scale-Invariant Feature Transform (SIFT).
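To give a flavor of that Chapter 1 feature-extraction material, here is a compact OpenCV sketch of the three detectors just named; the input file name scene.jpg is a placeholder and the parameter values are common defaults, not necessarily the book's settings.

# Sketch of Harris, Shi-Tomasi, and SIFT feature detection with OpenCV
# ("scene.jpg" is a placeholder file name).
import cv2
import numpy as np

img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Harris Corner Detection: a corner-response map, thresholded to mark corners red.
harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
img[harris > 0.01 * harris.max()] = (0, 0, 255)

# Shi-Tomasi Corner Detector: the N strongest corners as (x, y) points.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=10)
for x, y in corners.reshape(-1, 2):
    cv2.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)

# SIFT: scale-invariant keypoints plus 128-dimensional descriptors.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)
img = cv2.drawKeypoints(img, keypoints, img)

cv2.imwrite("features.jpg", img)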
In Chapter 2, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to classify fruits using the Fruits 360 dataset provided by Kaggle (https://www.kaggle.com/moltean/fruits/code) with Transfer Learning and CNN models. You will build a GUI application for this purpose. In Chapter 3, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to classify cats/dogs using the dataset provided by Kaggle (https://www.kaggle.com/chetankv/dogs-cats-images) with a CNN and a data generator. You will build a GUI application for this purpose. In Chapter 4, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to detect furniture using the Furniture Detector dataset provided by Kaggle (https://www.kaggle.com/akkithetechie/furniture-detector) with the VGG16 model. You will build a GUI application for this purpose. In Chapter 5, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to classify fashion items using the Fashion MNIST dataset provided by Kaggle (https://www.kaggle.com/zalando-research/fashionmnist/code) with a CNN model. You will build a GUI application for this purpose. BOOK 3: PROJECT-BASED APPROACH ON DEEP LEARNING USING SCIKIT-LEARN, KERAS, AND TENSORFLOW WITH PYTHON GUI In this book, you will implement deep learning for detecting vehicle license plates, recognizing sign language, and detecting surface cracks using TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries. In Chapter 1, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to detect vehicle license plates using the Car License Plate Detection dataset provided by Kaggle (https://www.kaggle.com/andrewmvd/car-plate-detection/download). In Chapter 2, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to perform sign language recognition using the Sign Language Digits Dataset provided by Kaggle (https://www.kaggle.com/ardamavi/sign-language-digits-dataset/download). In Chapter 3, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to detect surface cracks using the Surface Crack Detection dataset provided by Kaggle (https://www.kaggle.com/arunrk7/surface-crack-detection/download).
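The fruits and furniture chapters both follow the standard Keras transfer-learning recipe. A minimal sketch of that recipe is shown below; the 224x224 input size, the five-class head, and the frozen backbone are illustrative assumptions, not the book's exact configuration.

# Minimal Keras transfer-learning sketch in the style of the fruits and
# furniture chapters (input size and class count are assumptions).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pretrained backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # collapse feature maps to a vector
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),  # one output unit per class
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Swapping VGG16 for MobileNetV2 or another tensorflow.keras.applications model changes only the first two lines.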

Book BRAIN TUMOR: Analysis, Classification, and Detection Using Machine Learning and Deep Learning with Python GUI

Download or read book BRAIN TUMOR Analysis Classification and Detection Using Machine Learning and Deep Learning with Python GUI written by Vivian Siahaan and published by BALIGE PUBLISHING. This book was released on 2023-06-24 with total page 332 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this book, you will learn how to use Scikit-Learn, TensorFlow, Keras, NumPy, Pandas, Seaborn, and other libraries to implement brain tumor classification and detection with machine learning using the Brain Tumor dataset provided by Kaggle. This dataset contains five first-order features: Mean (the contribution of individual pixel intensity for the entire image), Variance (how each pixel varies from its neighboring pixels), Standard Deviation (the deviation of the measured values from their mean), Skewness (a measure of symmetry), and Kurtosis (describes the peakedness of a frequency distribution). It also contains eight second-order features: Contrast, Energy, ASM (Angular Second Moment), Entropy, Homogeneity, Dissimilarity, Correlation, and Coarseness. In this project, various methods and functionalities related to machine learning and deep learning are covered. Here is a summary of the process: Data Preprocessing: Loaded and preprocessed the dataset using various techniques such as feature scaling, encoding categorical variables, and splitting the dataset into training and testing sets.; Feature Selection: Implemented feature selection techniques such as SelectKBest, Recursive Feature Elimination, and Principal Component Analysis to select the most relevant features for the model.; Model Training and Evaluation: Trained and evaluated multiple machine learning models such as Random Forest, AdaBoost, Gradient Boosting, Logistic Regression, and Support Vector Machines using cross-validation and hyperparameter tuning. Implemented ensemble methods like the Voting Classifier and Stacking Classifier to combine the predictions of multiple models. Calculated evaluation metrics such as accuracy, precision, recall, F1-score, and mean squared error for each model. Visualized the predictions and confusion matrix for the models using plotting techniques.; Deep Learning Model Building and Training: Built deep learning models using architectures such as MobileNet and ResNet50 for image classification tasks. Compiled and trained the models using appropriate loss functions, optimizers, and metrics. Saved the trained models and their training history for future use.; Visualization and Interaction: Implemented methods to plot the training loss and accuracy curves during model training. Created interactive widgets for displaying prediction results and confusion matrices. Linked the selection of prediction options in combo boxes to trigger the corresponding prediction and visualization functions.; Throughout the process, various libraries and frameworks such as scikit-learn, TensorFlow, and Keras are used to perform the tasks efficiently. The overall goal was to train models, evaluate their performance, visualize the results, and provide an interactive experience for the user to explore different prediction options.
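The Voting Classifier and Stacking Classifier step mentioned above can be sketched with scikit-learn as follows; the choice of base estimators and the synthetic stand-in data (13 features, matching the five first-order plus eight second-order features) are assumptions for illustration.

# Sketch of the ensemble step described above (estimator choices and
# hyperparameters are illustrative, not the book's exact configuration).
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 13 first- and second-order features.
X, y = make_classification(n_samples=300, n_features=13, random_state=42)

base_estimators = [
    ("rf", RandomForestClassifier(random_state=42)),
    ("ada", AdaBoostClassifier(random_state=42)),
    ("gb", GradientBoostingClassifier(random_state=42)),
]

# Soft voting averages the predicted class probabilities of the base models.
voting = VotingClassifier(estimators=base_estimators, voting="soft")

# Stacking trains a final logistic-regression model on the base predictions.
stacking = StackingClassifier(estimators=base_estimators,
                              final_estimator=LogisticRegression(max_iter=1000))

print("voting  :", cross_val_score(voting, X, y, cv=5).mean())
print("stacking:", cross_val_score(stacking, X, y, cv=5).mean())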

Book Project-Based Approach On DEEP LEARNING Using Scikit-Learn, Keras, and TensorFlow with Python GUI

Download or read book Project Based Approach On DEEP LEARNING Using Scikit Learn Keras And TensorFlow with Python GUI written by Vivian Siahaan and published by BALIGE PUBLISHING. This book was released on 2023-06-19 with total page 224 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this book, you will implement deep learning for detecting vehicle license plates, recognizing sign language, and detecting surface cracks using TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries. In chapter 1, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to detect vehicle license plates using the Car License Plate Detection dataset provided by Kaggle (https://www.kaggle.com/andrewmvd/car-plate-detection/download). To perform license plate detection, these steps are taken: 1. Dataset Preparation: Extract the dataset and organize it into separate folders for images and annotations. The annotations should contain bounding box coordinates for license plate regions.; 2. Data Preprocessing: Load the images and annotations from the dataset. Preprocess the images by resizing, normalizing, or applying any other necessary transformations. Convert the annotation bounding box coordinates to the appropriate format for training.; 3. Training Data Generation: Divide the dataset into training and validation sets. Generate training data by augmenting the images and annotations (e.g., flipping, rotating, zooming). Create data generators or data loaders to efficiently load the training data.; 4. Model Development: Choose a suitable deep learning model architecture for license plate detection, such as a convolutional neural network (CNN). Use TensorFlow and Keras to develop the model architecture. Compile the model with appropriate loss functions and optimization algorithms.; 5. Model Training: Train the model using the prepared training data. Monitor the training process by tracking metrics like loss and accuracy. Adjust the hyperparameters or model architecture as needed to improve performance.; 6. Model Evaluation: Evaluate the trained model using the validation set. Calculate relevant metrics like precision, recall, and F1 score. Make any necessary adjustments to the model based on the evaluation results.; 7. License Plate Detection: Use the trained model to detect license plates in new images. Apply any post-processing techniques to refine the detected regions. Extract the license plate regions and further process them if needed. In chapter 2, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to perform sign language recognition using the Sign Language Digits Dataset. Here are the steps to perform sign language recognition using the Sign Language Digits Dataset: 1. Download the dataset from Kaggle: Visit the Kaggle Sign Language Digits Dataset page (https://www.kaggle.com/ardamavi/sign-language-digits-dataset) and download the dataset.; 2. Extract the dataset: After downloading the dataset, extract the contents from the downloaded zip file to a suitable location on your local machine.; 3. Load the dataset: The dataset consists of two parts: images and a CSV file containing the corresponding labels. The images are stored in a folder, and the CSV file contains the image paths and labels.; 4. Preprocess the dataset: Depending on the specific requirements of your model, you may need to preprocess the dataset.
This can include tasks such as resizing images, converting labels to numerical format, normalizing pixel values, or splitting the dataset into training and testing sets.; 5. Build a machine learning model: Use libraries such as TensorFlow and Keras to build a sign language recognition model. This typically involves designing the architecture of the model, compiling it with suitable loss functions and optimizers, and training the model on the preprocessed dataset.; 6. Evaluate the model: After training the model, evaluate its performance using appropriate evaluation metrics. This can help you understand how well the model is performing on the sign language recognition task.; 7. Make predictions: Once the model is trained and evaluated, you can use it to make predictions on new sign language images. Pass an image through the model, and it will predict the corresponding sign language digit. In chapter 3, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to detect surface cracks using the Surface Crack Detection dataset provided by Kaggle (https://www.kaggle.com/arunrk7/surface-crack-detection/download). Here's a general outline of the process: Data Preparation: Start by downloading the dataset from the Kaggle link above. Extract the dataset and organize it into appropriate folders (e.g., training and testing folders).; Import Libraries: Begin by importing the necessary libraries, including TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, and NumPy.; Data Loading and Preprocessing: Load the images and labels from the dataset. Since the dataset may come in different formats, it's essential to understand its structure and adjust the code accordingly. Use OpenCV to read the images and Pandas to load the labels.; Data Augmentation: Perform data augmentation techniques such as rotation, flipping, and scaling to increase the diversity of the training data and prevent overfitting. You can use the ImageDataGenerator class from Keras for this purpose (see the sketch after this outline).; Model Building: Define your neural network architecture using the Keras API with the TensorFlow backend. You can start with a simple architecture like a convolutional neural network (CNN). Experiment with different architectures to achieve better performance.; Model Compilation: Compile your model by specifying the loss function, optimizer, and evaluation metric. For a binary classification problem like crack detection, you can use binary cross-entropy as the loss function and Adam as the optimizer.; Model Training: Train your model on the prepared dataset using the fit() method. Split your data into training and validation sets using train_test_split() from Scikit-Learn. Monitor the training progress and adjust hyperparameters as needed.; Model Evaluation: Evaluate the performance of your trained model on the test set. Use appropriate evaluation metrics such as accuracy, precision, recall, and F1 score. Scikit-Learn provides functions for calculating these metrics.; Model Prediction: Use the trained model to detect cracks in new, unseen images. Load the test images, preprocess them if necessary, and use the trained model to make predictions.
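For the surface-crack chapter, the augmentation and compilation steps described above map onto Keras roughly as follows; the crack_data/train directory layout, the 120x120 input size, and the layer counts are assumptions for illustration.

# Sketch of the augmentation plus binary-classification setup described
# above (directory layout and image size are assumptions).
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augment on the fly: rotation, flips, zoom, plus a validation split.
datagen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=15,
                             horizontal_flip=True, zoom_range=0.2,
                             validation_split=0.2)
train = datagen.flow_from_directory("crack_data/train", target_size=(120, 120),
                                    class_mode="binary", subset="training")
val = datagen.flow_from_directory("crack_data/train", target_size=(120, 120),
                                  class_mode="binary", subset="validation")

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(120, 120, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # crack / no crack
])
# Binary cross-entropy and Adam, as the outline suggests.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train, validation_data=val, epochs=5)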

Book Step-by-Step Tutorial IMAGE CLASSIFICATION Using Scikit-Learn, Keras, and TensorFlow with PYTHON GUI

Download or read book Step by Step Tutorial IMAGE CLASSIFICATION Using Scikit Learn Keras And TensorFlow with PYTHON GUI written by Vivian Siahaan and published by BALIGE PUBLISHING. This book was released on 2023-06-21 with total page 211 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this book, you will implement deep learning-based image classification to classify monkey species, recognize rock, paper, and scissors, and classify airplanes, cars, and ships using TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries. In chapter 1, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to classify monkey species using the 10 Monkey Species dataset provided by Kaggle (https://www.kaggle.com/slothkong/10-monkey-species/download). Here's an overview of the steps involved in classifying monkey species using the 10 Monkey Species dataset: Dataset Preparation: Download the 10 Monkey Species dataset from Kaggle and extract the files. The dataset should consist of separate folders for each monkey species, with corresponding images.; Load and Preprocess Images: Use libraries such as OpenCV to load the images from the dataset. Resize the images to a consistent size (e.g., 224x224 pixels) to ensure uniformity.; Split the Dataset: Divide the dataset into training and testing sets. Typically, an 80:20 or 70:30 split is used, where the larger portion is used for training and the smaller portion for testing the model's performance.; Label Encoding: Encode the categorical labels (monkey species) into numeric form. This step is necessary to train a machine learning model, as most algorithms expect numerical inputs.; Feature Extraction: Extract meaningful features from the images using techniques like deep learning or image processing algorithms. This step helps in representing the images in a format that the machine learning model can understand.; Model Training: Use libraries like TensorFlow and Keras to train a machine learning model on the preprocessed data. Choose an appropriate model architecture, in this case, MobileNetV2.; Model Evaluation: Evaluate the trained model on the testing set to assess its performance. Metrics like accuracy, precision, recall, and F1-score can be used to evaluate the model's classification performance.; Predictions: Use the trained model to make predictions on new, unseen images. Pass the images through the trained model and obtain the predicted labels for the monkey species. In chapter 2, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to recognize rock, paper, and scissors using the dataset provided by Kaggle (https://www.kaggle.com/sanikamal/rock-paper-scissors-dataset/download). Here's the outline of the steps: Step 1: Dataset Preparation: Download the rock-paper-scissors dataset from Kaggle by visiting the provided link and clicking on the "Download" button. Save the dataset to a local directory on your machine. Extract the downloaded dataset to a suitable location. This will create a folder containing the images for rock, paper, and scissors.; Step 2: Data Preprocessing: Import the required libraries: TensorFlow, Keras, NumPy, OpenCV, and Pandas. Load the dataset using OpenCV: Iterate through the image files in the dataset directory and use OpenCV's cv2.imread() function to load each image. You can specify the image's file extension (e.g., PNG) and directory path.
Preprocess the images: Resize the loaded images to a consistent size using OpenCV's cv2.resize() function. You may choose a specific width and height suitable for your model. Prepare the labels: Create a list or array to store the corresponding labels for each image (rock, paper, or scissors). This can be done based on the file naming convention or by mapping images to their respective labels using a dictionary.; Step 3: Model Training: Create a convolutional neural network (CNN) model using Keras: Define a CNN architecture using Keras' Sequential model or functional API. This typically consists of convolutional layers, pooling layers, and dense layers. Compile the model: Specify the loss function (e.g., categorical cross-entropy) and optimizer (e.g., Adam) using Keras' compile() function. You can also define additional metrics to evaluate the model's performance. Train the model: Use Keras' fit() function to train the model on the preprocessed dataset. Specify the training data, labels, batch size, number of epochs, and validation data if available. This will optimize the model's weights based on the provided dataset. Save the trained model: Once the model training is complete, you can save the trained model to disk using Keras' save() or save_weights() function. This allows you to load the model later for predictions or further training.; Step 4: Model Evaluation: Evaluate the trained model: Use Keras' evaluate() function to assess the model's performance on a separate testing dataset. Provide the testing data and labels to calculate metrics such as accuracy, precision, recall, and F1 score. This will help you understand how well the model generalizes to new, unseen data. Analyze the model's performance: Interpret the evaluation metrics and analyze any potential areas of improvement. You can also visualize the confusion matrix or classification report to gain more insights into the model's predictions.; Step 5: Prediction: Use the trained model for predictions: Load the saved model using Keras' load_model() function. Then, pass new, unseen images through the model to obtain predictions. Preprocess these images in the same way as the training images (resize, normalize, etc.). Visualize and interpret predictions: Display the predicted labels alongside the corresponding images to see how well the model performs. You can use libraries like Matplotlib or OpenCV to show the images and their predicted labels. Additionally, you can calculate the accuracy of the model's predictions on the new dataset. In chapter 3, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to classify airplanes, cars, and ships using the Multiclass-image-dataset-airplane-car-ship dataset provided by Kaggle (https://www.kaggle.com/abtabm/multiclassimagedatasetairplanecar). Here is an outline of the steps (the loading, encoding, and splitting stage is sketched in code after this outline): Import the required libraries: TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy. Load and preprocess the dataset: Read the images from the dataset folder. Resize the images to a fixed size. Store the images and corresponding labels.; Split the dataset into training and testing sets: Split the data and labels into training and testing sets using a specified ratio.; Encode the labels: Convert the categorical labels into numerical format. Perform one-hot encoding on the labels.; Build a MobileNetV2-based model using Keras: Create a sequential model. Add convolutional layers with activation functions. Add pooling layers for downsampling.
Flatten the output and add dense layers. Set the output layer with softmax activation.; Compile and train the model: Compile the model with an optimizer and loss function. Train the model using the training data and labels. Specify the number of epochs and batch size.; Evaluate the model: Evaluate the trained model using the testing data and labels. Calculate the accuracy of the model.; Make predictions on new images: Load and preprocess a new image. Use the trained model to predict the label of the new image. Convert the predicted label from numerical format to categorical.
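The loading, encoding, and splitting stage outlined above for the airplane/car/ship chapter can be sketched as follows; the dataset/<class> folder layout and the 128x128 image size are assumptions.

# Sketch of the load/resize/encode/split pipeline outlined above
# (the "dataset/<class>" folder layout is an assumption).
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical

images, labels = [], []
for class_name in ("airplane", "car", "ship"):
    folder = os.path.join("dataset", class_name)
    for fname in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, fname))
        if img is None:                       # skip unreadable files
            continue
        images.append(cv2.resize(img, (128, 128)) / 255.0)
        labels.append(class_name)

X = np.array(images, dtype="float32")
y = to_categorical(LabelEncoder().fit_transform(labels))   # one-hot labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(X_train.shape, y_train.shape)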

Book Hands-On Guide To IMAGE CLASSIFICATION Using Scikit-Learn, Keras, and TensorFlow with PYTHON GUI

Download or read book Hands On Guide To IMAGE CLASSIFICATION Using Scikit Learn Keras And TensorFlow with PYTHON GUI written by Vivian Siahaan and published by BALIGE PUBLISHING. This book was released on 2023-06-20 with total page 210 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this book, you will implement deep learning for detecting face masks, classifying weather, and recognizing flowers using TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries. In chapter 1, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to detect face masks using the Face Mask Detection Dataset provided by Kaggle (https://www.kaggle.com/omkargurav/face-mask-dataset/download). Here's an overview of the steps involved in detecting face masks using the Face Mask Detection Dataset: Import the necessary libraries: Import the required libraries like TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, and NumPy.; Load and preprocess the dataset: Load the dataset and perform any necessary preprocessing steps, such as resizing images and converting labels into numeric representations.; Split the dataset: Split the dataset into training and testing sets using the train_test_split function from Scikit-Learn. This will allow us to evaluate the model's performance on unseen data.; Data augmentation (optional): Apply data augmentation techniques to artificially increase the size and diversity of the training set. Techniques like rotation, zooming, and flipping can help improve the model's generalization.; Build the model: Create a Convolutional Neural Network (CNN) model using TensorFlow and Keras. Design the architecture of the model, including the number and type of layers.; Compile the model: Compile the model by specifying the loss function, optimizer, and evaluation metrics. This prepares the model for training.; Train the model: Train the model on the training dataset. Adjust the hyperparameters, such as the learning rate and number of epochs, to achieve optimal performance.; Evaluate the model: Evaluate the trained model on the testing dataset to assess its performance. Calculate metrics such as accuracy, precision, recall, and F1 score.; Make predictions: Use the trained model to make predictions on new images or video streams. Apply the face mask detection algorithm to identify whether a person is wearing a mask or not.; Visualize the results: Visualize the predictions by overlaying bounding boxes or markers on the images or video frames to indicate the presence or absence of face masks. In chapter 2, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to classify weather using the Multi-class Weather Dataset provided by Kaggle (https://www.kaggle.com/pratik2901/multiclass-weather-dataset/download). To classify weather using the Multi-class Weather Dataset from Kaggle, you can follow these general steps: Load the dataset: Use libraries like Pandas or NumPy to load the dataset into memory. Explore the dataset to understand its structure and the available features.; Preprocess the data: Perform necessary preprocessing steps such as data cleaning, handling missing values, and feature engineering. This may include resizing images (if the dataset contains images) or encoding categorical variables.; Split the data: Split the dataset into training and testing sets.
The training set will be used to train the model, and the testing set will be used for evaluating its performance.; Build a model: Utilize TensorFlow and Keras to define a suitable model architecture for weather classification. The choice of model depends on the type of data you have. For image data, convolutional neural networks (CNNs) often work well.; Train the model: Train the model using the training data. Use appropriate training techniques like gradient descent and backpropagation to optimize the model's weights.; Evaluate the model: Evaluate the trained model's performance using the testing data. Calculate metrics such as accuracy, precision, recall, or F1-score to assess how well the model performs.; Fine-tune the model: If the model's performance is not satisfactory, you can experiment with different hyperparameters, architectures, or regularization techniques to improve its performance. This process is called model tuning.; Make predictions: Once you are satisfied with the model's performance, you can use it to make predictions on new, unseen data. Provide the necessary input (e.g., an image or weather features) to the trained model, and it will predict the corresponding weather class. In chapter 3, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy and other libraries to recognize flowers using the Flowers Recognition dataset provided by Kaggle (https://www.kaggle.com/alxmamaev/flowers-recognition/download). Here are the general steps involved in recognizing flowers: Data Preparation: Download the Flowers Recognition dataset from Kaggle and extract the contents. Import the required libraries and define the dataset path and image dimensions.; Loading and Preprocessing the Data: Load the images and their corresponding labels from the dataset. Resize the images to a specific dimension. Perform label encoding on the flower labels and split the data into training and testing sets. Normalize the pixel values of the images.; Building the Model: Define the architecture of your model using TensorFlow's Keras API. You can choose from various neural network architectures such as CNNs, ResNet, or InceptionNet. The model architecture should be designed to handle image inputs and output the predicted flower class.; Compiling and Training the Model: Compile the model by specifying the loss function, optimizer, and evaluation metrics. Common choices include categorical cross-entropy loss and the Adam optimizer. Train the model using the training set and validate it using the testing set. Adjust the hyperparameters, such as the learning rate and number of epochs, to improve performance.; Model Evaluation: Evaluate the trained model on the testing set to measure its performance. Calculate metrics such as accuracy, precision, recall, and F1-score to assess how well the model is recognizing flower classes.; Prediction: Use the trained model to predict the flower class for new images (a brief sketch of this step follows). Load and preprocess the new images in a similar way to the training data. Pass the preprocessed images through the trained model and obtain the predicted flower class labels.; Further Improvements: If the model's performance is not satisfactory, consider experimenting with different architectures, hyperparameters, or techniques such as data augmentation or transfer learning. Fine-tuning the model or using ensembles of models can also improve accuracy.
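The prediction step described above follows the usual Keras pattern. A brief sketch is shown below; the model and image file names are placeholders, and the class list assumes the dataset's five flower folders.

# Sketch of the prediction step described above (file names are
# placeholders; the class list assumes the dataset's five folders).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

CLASSES = ["daisy", "dandelion", "rose", "sunflower", "tulip"]

model = load_model("flower_model.h5")            # the model saved after training

img = cv2.imread("new_flower.jpg")
img = cv2.resize(img, (128, 128)) / 255.0        # same preprocessing as training
img = np.expand_dims(img.astype("float32"), axis=0)  # add a batch dimension

probs = model.predict(img)[0]
print("predicted:", CLASSES[int(np.argmax(probs))],
      f"({probs.max():.2%} confidence)")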