Tuesday, March 08, 2022

An Introduction to Key Vault Monitoring and Alerts Approach

 



Key Vault Monitoring and Alerts Approach Using Azure Event Grid and Azure Automation Account


We know Azure Key Vault provides a safe and easy way to store client secrets, API keys, passwords, connection strings, certificates, etc. Consequently, from a security and audit perspective, it's important to monitor the following Key Vault activities –
  • Changes captured in the Key Vault event logs
  • Changes to keys and secrets stored in the Key Vault
  • Changes to Key Vault access policies, etc.


Scope Definition


Here, we focus on Key Vault monitoring, i.e. fetching key change events using Event Grid and an Azure Automation account (Webhook and Runbook). Key Vault integration with Azure Event Grid (preview) lets you be notified when a secret in the Key Vault is about to expire, has expired, or has a new version available. 

When such a status change occurs, Event Grid makes an HTTP POST to the configured endpoint, and the Webhook triggers the Automation account to execute a PowerShell Runbook.


In addition, under the Key Vault's Events blade, we can configure an Event Subscription and select the previously created Webhook URL as the Subscriber Endpoint.



Now any Key Vault event, such as a new secret or any change, will be captured by Event Grid. Under Metrics we can verify the event and cross-check the trigger status of the Webhook.


Scope Implementation


For this demo, I'll use the same test Key Vault, where each change event for keys or secrets is captured by Event Grid and the Webhook. In parallel, we can trigger an alert mail by building a Logic App on top of the Event Grid implementation together with a SendGrid account.

In brief, the following resources would be covered in this hands-on activity:
  • Key Vault
  • Event Grid
  • Webhook/Runbook (Automation Account)
  • Logic Apps
  • SendGrid

Prepare Runbook


Before proceeding, we need to create a PowerShell Runbook and publish it with a small snippet of code that receives and parses the incoming POST request, as sketched below.
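
A minimal sketch of such a Runbook body is shown below. It assumes only the standard $WebhookData parameter that Azure Automation passes to a webhook-started Runbook, and it simply parses and logs the Event Grid payload; the output lines are illustrative, not the exact script used in the demo.

param(
    [Parameter(Mandatory = $false)]
    [object] $WebhookData
)

if ($WebhookData) {
    # Event Grid delivers the event(s) as a JSON array in the request body
    $events = $WebhookData.RequestBody | ConvertFrom-Json

    foreach ($evt in $events) {
        Write-Output "Event type : $($evt.eventType)"
        Write-Output "Subject    : $($evt.subject)"
        Write-Output "Event time : $($evt.eventTime)"
    }
}
else {
    Write-Output "This Runbook was not started from a webhook."
}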



Prepare Webhook


Next, create a Webhook to trigger the newly created Runbook. One thing to keep in mind during Webhook creation: copy the URL and save it somewhere safe, because it cannot be viewed again later.
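
If you prefer scripting over the portal, a rough Az PowerShell sketch for the Webhook creation is given below; the resource names are placeholders, the Az.Automation module is assumed, and the key point is that the URL is only returned at creation time.

# Create a webhook for the published Runbook and capture its one-time URL
$webhook = New-AzAutomationWebhook `
    -ResourceGroupName "demo-rg" `
    -AutomationAccountName "demo-automation" `
    -RunbookName "KeyVaultEventRunbook" `
    -Name "KeyVaultEventWebhook" `
    -IsEnabled $true `
    -ExpiryTime (Get-Date).AddYears(1) `
    -Force

# Save this value now - it cannot be viewed again later
$webhook.WebhookURI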



Prepare Event Grid Subscription


Move inside the Key Vault and click Events (Preview). Here we need to submit the prerequisite details, such as the Filter to Event Types and the Endpoint Type.


Select all options under Filter to Event Types and choose Webhook as the Endpoint Type. In the new context pane, paste the Webhook URL we copied during Webhook creation into the Subscriber Endpoint field.
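
The same subscription can also be created with Az PowerShell, roughly as in the sketch below (the vault name is a placeholder, the Az.EventGrid module is assumed, and parameter names may differ slightly between module versions).

# Scope the subscription to the Key Vault and point it at the Automation webhook URL
$keyVaultId = (Get-AzKeyVault -VaultName "demo-keyvault").ResourceId

New-AzEventGridSubscription `
    -ResourceId $keyVaultId `
    -EventSubscriptionName "kv-change-events" `
    -EndpointType "webhook" `
    -Endpoint $webhook.WebhookURI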


Prepare Logic Apps via Event Grid


Using the above Event Grid implementation, we can capture the Key Vault change events, and via an Azure Logic App we can trigger a notification mail whenever a key event occurs in the Key Vault.

Next, move to the Key Vault, select Events - Get Started and click Logic Apps.


In the Logic Apps Designer, validate the connection and proceed. Here, we can select all change events under Event Type Item to catch every change.


We can use a mail-specific action, i.e. SendGrid, with its account and API key details to trigger the email. We then build an email template that includes dynamic content based on the event data.



Validate the Event Grid 


From a validation perspective, we can perform some activity on the Key Vault, such as creating a new key or secret, generating a new version, or changing a policy. We can even set the expiry date a bit early so the service catches the near-expiry event; a sketch follows.
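
For instance, a quick way to exercise the expiry-related events is to create a test secret whose expiry date is only a few minutes away; a hedged Az PowerShell sketch (vault and secret names are placeholders) could look like this.

# Create a test secret that expires shortly, so the near-expiry/expired events fire soon
$secretValue = ConvertTo-SecureString "demo-value" -AsPlainText -Force

Set-AzKeyVaultSecret `
    -VaultName "demo-keyvault" `
    -Name "expiring-test-secret" `
    -SecretValue $secretValue `
    -Expires (Get-Date).AddMinutes(10)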


Whenever an event occurs, it is captured by Event Grid and can be seen under the Event Grid Metrics.


In addition, we can validate using the Webhook: the "last triggered" timestamp should be within 60 seconds of the key or secret change. This confirms that Event Grid made a POST to the Webhook with the event details of the status change in the Key Vault and that the Webhook was triggered.


Validate Logic Apps & Email Alert


Similar to the Event Grid and Webhook validation, the Logic App triggers the mail through the SendGrid account with all the event details.


Thus, we can monitor the Key Vault by receiving alerts for raised events, for example the following – 
  • Keys about to expire
  • Keys expired
  • Keys have been created
  • Policy has been changed 

In the next article, we will walk through some other hands-on activities. Keep visiting and talking! 😊


Wednesday, July 29, 2020

Practice Set for Oracle Autonomous Database Cloud Specialist (1Z0-931)






Oracle Autonomous Database Cloud Specialist 



Due to COVID-19 and the lockdown, life has changed a lot, and most IT professionals have shifted from the office to home. No doubt the situation is difficult, but we can use this pandemic period to upskill ourselves. Recently, Oracle provided free learning along with free certification exams. I took advantage of this platform and earned a couple of certificates, one of them being 1Z0-931-20, i.e. Oracle Autonomous Database Cloud 2020 Specialist.

Certainly, continuous engagement and hands-on activities make anyone better. A practical approach is always best, but sometimes we lack the theoretical scenarios, and that becomes a hurdle on the certification path.

I encountered similar issues once I decided to pursue the 1Z0-931-20 course, but I still managed to earn the certificate. As a practice aid, I prepared a set of questions specifically for the Oracle Autonomous Database exam.

Practice Set 


Go through the following practice set and try to pick the correct answer for each question. I have included only a subset of questions here, since the entire list (65+) is not feasible to post on the blog. If required, please drop your email id in the comment box and I'll try to share the full practice sets.

1. What are two advantages of using Data Pump to migrate your Oracle Databases to Autonomous Database? (Choose two.)
  1. Data Pump can exclude migration of objects like indexes and materialized views that are not needed by Autonomous Database.
  2. Data Pump is platform independent - it can migrate Oracle Databases running on any platform.
  3. Data Pump is faster to migrate database than using RMAN.
  4. Data Pump creates the tablespaces used by your Autonomous Database.


[Answer – 1 & 2]

2. The default eight-day retention period for Autonomous Database performance data can be modified using which DBMS_WORKLOAD_REPOSITORY subprogram procedure?
  1. UPDATE_OBJECT_INFO
  2. MODIFY_SNAPSHOT_SETTINGS
  3. CREATE_BASELINE_TEMPLATE
  4. MODIFY_BASELINE_WINDOW_SIZE

[Answer – 2]

3. Which task is NOT automatically performed by the Oracle Autonomous Database?
  1. Backing up the database.
  2. Mask your sensitive data.
  3. Patching the database.
  4. Automatically optimize the workload.

[Answer – 2]

4. Which three statements are true about procedures in the DBMS_CLOUD package? (Choose three.)
  1. The DBMS_CLOUD.PUT_OBJECT procedure copies a file from Cloud Object Storage to the Autonomous Data Warehouse.
  2. The DBMS_CLOUD.CREATE_CREDENTIAL procedure stores Cloud Object Storage credentials in the Autonomous Data Warehouse database.
  3. The DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE procedure validates the source files for an external table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Data Warehouse.
  4. The DBMS_CLOUD.DELETE_FILE procedure removes the credentials file from the Autonomous Data Warehouse database.
  5. The DBMS_CLOUD.CREATE_EXTERNAL_TABLE procedure creates an external table on files in the cloud. You can run queries on external data from the Autonomous Data Warehouse.

[Answer – 2, 3 & 5]

5. Which of these database features is NOT part of the Autonomous Database?
  1. Online Indexing
  2. Flashback Database
  3. Real Application Clusters (RAC)
  4. Java in the Database

[Answer – 4]

6. Which two statements are true with regards to Oracle Data Sync? (Choose two.)
  1. Data Sync can connect to any jdbc compatible source like MongoDB, RedShift and Sybase.
  2. Data Sync can use a normal OCI (thick) client connection to connect to an Oracle database.
  3. Data Sync can load your data in parallel in order to speed up the loading process.
  4. Data Sync has default drivers available that supported loading data from DB2, Microsoft SQL Server, MySQL and Teradata.

[Answer – 1 & 3]

7. Which statement is false about Autonomous Database Oracle Client Credentials (Wallets)?
  1. In addition to the Oracle Client Credential Wallet, a user must have a username and password in order to connect to the Autonomous Database.
  2. The Oracle Client Credential file is downloaded as a ZIP file.
  3. The Wallet for the Autonomous Database is the same as the Transparent Data Encryption (TDE) wallet.
  4. You MUST have an Oracle Client Credential Wallet in order to connect to the Autonomous Database.

[Answer – 3]

8. What is the predefined role that exists in Autonomous Database that includes common privileges that are used by a Data Warehouse developer?
  1. ADBDEV
  2. ADMIN
  3. DWROLE
  4. ADWC

[Answer – 3]

9. Which two system privileges does a user need to create analytic views? (Choose two.)
  1. CREATE ANALYTIC MEASURE
  2. CREATE ANALYTIC LEVEL
  3. CREATE ANALYTIC HIERARCHY
  4. CREATE ANALYTIC VIEW
  5. CREATE ATTRIBUTE DIMENSION

[Answer – 4 & 5]

10. What are three methods to load data into the Autonomous Database? (Choose three.)
  1. Oracle GoldenGate
  2. Transportable Tablespace
  3. RMAN Restore
  4. Oracle Data Pump
  5. SQL*Loader

[Answer – 1, 4 & 5]

11. While Autonomous Transaction Processing and Autonomous Data Warehouse use the same Oracle database, which statement is true about the workloads?
  1. Autonomous Transaction Processing memory usage optimizes workloads for parallel joins and aggregations.
  2. Autonomous Data Warehouse workloads are optimized for mixed workloads.
  3. Autonomous Transaction Processing workloads are optimized for data warehouse, data mart, and data lake.
  4. Data that is bulk loaded, by default, uses the row format in Autonomous Transaction Processing where Autonomous Data Warehouse data format is columnar.

[Answer – 4]

12. When scaling OCPUs in Autonomous Database, which statement is true in regards to active transactions?
  1. Active transactions continue running unaffected.
  2. Active transactions are paused.
  3. Scaling cannot happen while there are active transactions in the database.
  4. Active transactions are terminated and rolled back.

[Answer – 1]

13. Which three statements are correct when the Autonomous Database is stopped? (Choose three.)
  1. User with DWROLE can still access the database.
  2. Tools are no longer able to connect to a stopped instance.
  3. CPU billing is halted based on full-hour cycles of usage.
  4. In-flight transactions and queries are stopped.

[Answer – 2, 3 & 4]

14. Which two are correct actions to take in order to Download the Autonomous Database Credentials? (Choose two.)
  1. Click on the Autonomous Data Warehouse in the menu, click a database name, then Choose DB Connection button, then Download the Wallet.
  2. Click on the Autonomous Data Warehouse section, pick a database, then Choose Actions, then Download the Wallet.
  3. Find the Service Console for your Autonomous Database, then pick Administration, then Download the Client Credentials (Wallet).
  4. Click on the Object Storage and find your Autonomous Bucket and Download the Wallet Credentials. 
  5. Click the Compute section of the menu, then choose Instance Configurations, then Download Wallet.

[Answer – 1 & 3]

15. How many pre-defined service names are configured in tnsnames.ora for a single Autonomous Transaction Processing database instance, and what are they called?
  1. Two. They are called ATP and ADW.
  2. None. There are no pre-defined service names in tnsnames.ora.
  3. Three. They are called high, medium and low.
  4. Five. They are called tpurgent, tp, high, medium and low.

[Answer – 4]

16. If you need to connect to Autonomous Data Warehouse (ADW) using Java Database Connectivity (JDBC) via an HTTP proxy, where do you set the proxy details?
  1. tnsnames.ora
  2. keystore.jks
  3. sqlnet.ora
  4. cwallet.sso
  5. ojdbc.properties

[Answer – 1]

17. Your customer receives information in various formats like .csv files from their suppliers. The business user would like to collect all of this information and store it in an ATP environment. The Oracle adviser recommends using Oracle Data Sync for this.
Which statement is true regarding Oracle Data Sync?
  1. Data Sync can only load files into tables (insert-only), the customer has to write the additional code.
  2. Data Sync can not transform your data while loading it into the destination table.
  3. Data Sync can load a combination of data source, such as .csv, .xlsx and Oracle relational files.
  4. Data Sync can only load data from one source into one destination table.

[Answer – 3]

18. The 3rd-party application that your customer wants to migrate to Autonomous Database (ADB) has some specific demands like tablespace names, usernames and init.ora parameters. The decision was made to adhere to the suggested migration method using an instant client and the datapump version that was suggested (and came with it).
Which statement is true about the migration of the application's database success?
  1. The migration can be technically a success but the 3rd-party vendor needs to support the result.
  2. The suggested datapump version will create an alias for non-standard tablespace names so the migration is successful.
  3. The tablespace names will result in a blocking error during datapump import because of ADB limitations.
  4. The migration can be a success, both technically and functional due to datapump enhancements.

[Answer – 3]

19. A customer wants to migrate to Autonomous Database (ADB) but only allows for a very small window of downtime. Golden Gate was advised to be used during the migration. For maximum reassurance of their end-users, the customer also would like to use Golden Gate as a fall-back scenario for the first 6 months after the migration. If customers complain, the on-premise data can be synchronised with the ADB Instance for a switch back.
Which statement about the migration using Golden Gate is correct?
  1. Migration to ADB is not possible using Golden Gate because the apply-process cannot be installed on ADB.
  2. Only the migration to ADB is possible from an on-premise installation of Golden Gate.
  3. Golden Gate on premise is not certified with ADB because Golden Gate Cloud Service exists for this.
  4. The fallback scenario is not possible using Golden Gate because the capture-process cannot be installed on ADB.
  5. The described scenario is correct, can be used for migration and fallback scenarios.

[Answer – 4]

20. Which statement is true regarding database client credentials file required to connect to your Autonomous Database?
  1. Place the credential files on a share drive that all users can use to connect to the database.
  2. The Transparent Data Encryption (TDE) wallet can be used for your client credentials to connect to your database.
  3. Store credential files in a secure location and share the files only with authorized users to prevent unauthorized access to the database.
  4. When you share the credential files with authorized users, mail the wallet password and the file in the same email.

[Answer – 3]

21. In which way can a SQL Developer help you test your data loading scenario to Autonomous Database (ADB)?
  1. In the TEST phase of the wizard, a subset of accepted records is displayed based on your definition.
  2. In the TEST phase of the wizard a list is generated containing the records that would be rejected during import.
  3. In the Column Definition Phase, the system cross-references with the file-contents and shows the conflicts with the definition.
  4. In the TEST phase, a temporary table will be populated with the records before inserting them in the destination table.

[Answer – 2]

22. Where can a user's public ssh key be added on the Oracle Cloud Infrastructure Console in order to execute API calls?
  1. On the Autonomous Database Console.
  2. SSH keys are not required in Oracle Cloud Infrastructure.
  3. SSH keys cannot be added from console. They have to be added using REST APIs only.
  4. Navigate to Identity, select Users panel on the console and select "Add Public Key".

[Answer – 4]

23. Which statement is true in regards to database links?
  1. You can call PL/SQL procedures and functions using a database link.
  2. Connect from Autonomous Database to remote database using a database link.
  3. Connect to Autonomous Database from remote database using a database link.
  4. Create a database link from one Autonomous Database to another Autonomous Database instance.

[Answer – 3]

24. How can an Autonomous Database resource be provisioned without logging into the Oracle Cloud Infrastructure console?
  1. Using Database Configuration Assistant (DBCA) on the database server.
  2. It cannot be done.
  3. Connecting to the Cloud Infrastructure Command console via SSH wallet.
  4. Using the Oracle Cloud Infrastructure Command Line interface tool or REST API calls.

[Answer – 4]

25. Which Autonomous Database Cloud service ignores hints in SQL Statements by default?
  1. Autonomous Transaction Processing.
  2. Autonomous Data Warehouse.
  3. Neither service ignores hints by default.
  4. Both services ignore hints by default.

[Answer – 2]

Best wishes for the exam!! 😊

NOTE: I tried my best to answer correctly, but if you see any discrepancy in an answer, let me know and I'll try to rectify it. Also, as stated previously, due to the long list it's not feasible to share all the questions here. If required, please raise a request through the comment box and I'll share them *.


* Conditions Apply

Friday, May 31, 2019

Azure Cosmos DB and SQL API account - Hands on activity


Azure Cosmos DB

  

Azure Cosmos DB – brief Introduction 


In one of the previous articles, we talked about Azure Cosmos DB and its bits and pieces. Azure Cosmos DB is a globally distributed, fully managed database service for building planet-scale applications. It leverages the Azure cloud infrastructure to support global-scale application workloads.

Azure Cosmos DB, formerly known as DocumentDB, is a multi-model NoSQL database on the Azure cloud platform that can store and process massive amounts of structured, semi-structured and unstructured data. It provides native support for several APIs to access your database, such as MongoDB, Cassandra, Azure Table, Gremlin and SQL.

In this article, we will talk about the SQL API and do some hands-on activity on top of an Azure Cosmos DB SQL API account. The SQL API is ideal if you want to build a non-relational document database and query it using SQL-like syntax. As part of this exercise, you will cover the following tasks – 
  • Create an Azure Cosmos DB SQL API account
  • Create a document database and a collection (item)
  • Add data to the collection
  • Query the data by using Data Explorer 


Pre-requisites


Before getting started, we need some essential pre-requisites to complete this Azure Cosmos DB exercise.
  1. Azure subscription, if you don't have an account then sign up for a free Azure account - https://azure.microsoft.com/en-gb/free/
  2. Hands-on familiarity with SQL syntax; it is not mandatory, but knowing it makes writing the different queries easier.


STEP – 1: Create an Azure Cosmos DB account


Log in to the Azure portal https://portal.azure.com/

In the Microsoft Azure portal, click + Create a resource from the Hub and click Databases in the Azure Marketplace. This loads all available database services; under the Featured section, select Azure Cosmos DB, or search for it using the search box.

Azure Portal


The Create Azure Cosmos DB Account blade loads as soon as you click Azure Cosmos DB. Here, some required details need to be submitted to proceed, such as Project Details and Instance Details under the Basics tab.

Make sure the correct subscription is selected, and then choose an existing Resource group or create a new one using Create new.

In my case, I am going with the earlier created resource group ‘demogroup’.

Create Azure Cosmos DB Account


Under Instance Details, enter valid details for the following properties – 
  • Account Name – Enter a unique name to identify the Azure Cosmos account.
  • API – The API determines the type of account to create; select Core (SQL) to create a document database and query it using SQL syntax.
  • Apache Spark – Keep the default Disabled mode.
  • Location – Select a geographic location or region to host the Azure Cosmos DB account.
  • Geo-Redundancy – Keep the default Disabled mode.
  • Multi-region Writes – Keep the default Disabled mode.


Azure Cosmos - Instance Details

Now that nearly all properties are filled in, continue by clicking the Review + Create button at the bottom; you can skip the Network and Tags sections. Within a few moments, once validation succeeds, a final review page appears that displays the details of the Azure Cosmos DB account you are about to create.
  
Azure Cosmos - Validation Success


If everything looks right, click the Create button; it will take a few minutes for your Azure Cosmos DB account to be deployed.

Azure Cosmos - Deployment


You will be notified once the Azure Cosmos DB account is created successfully.

Azure Cosmos - Deployed


Congratulations, an Azure Cosmos DB account is provisioned!! 😊
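
As a side note, the same provisioning can be scripted instead of using the portal; a rough Az PowerShell sketch is below (the Az.CosmosDB module is assumed, the account name is a placeholder, and parameter names may vary by module version).

# Provision a Core (SQL) API Cosmos DB account in the existing resource group
New-AzCosmosDBAccount `
    -ResourceGroupName "demogroup" `
    -Name "demo-cosmos-sql-account" `
    -ApiKind "Sql" `
    -Location "East US" `
    -DefaultConsistencyLevel "Session"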

STEP – 2: Add a Database and a Container


We know that Azure Cosmos DB is a globally distributed, multi-model database that supports the document, graph, and key-value data models. A collection is a container of JSON documents and the associated JavaScript application logic, i.e. stored procedures, triggers and user-defined functions.

A collection maps to a container in Azure Cosmos DB; it is therefore a billable entity, where the cost is determined by the provisioned throughput expressed in Request Units per second (RU/s).

Move to the Azure portal, click All resources from the Hub, and select the Azure Cosmos DB account that you created in the previous steps.

Azure Cosmos - Overview Page

Since you have not created any container yet, it is time to create a database and container using Data Explorer. Select Data Explorer from the left navigation of the Azure Cosmos DB account page; it provides the option to create a container.

Data Explorer


Proceed by clicking New Container, which loads the Add Container pane where the following details are required for the new container –
  • Database id – Enter a new database id.
  • Provision database throughput – Leave this unselected (the default).
  • Container id – Enter a unique identifier for the container.
  • Partition key – For example, /category is used as the partition key in this task.
  • My partition key is larger than 100 bytes – leave this checkbox as-is.
  • Throughput – Keep the default option.


Data Explorer


Once you have submitted all the details, continue by clicking the OK button. Shortly, you will see the database and the collection under Data Explorer, with the names you provided during provisioning.

Data Explorer - Container
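
Equivalently, the database and container can be created with Az PowerShell, roughly like the sketch below; the account name is a placeholder, while the database (cosmoscontainer), container (items) and /category partition key follow this task, and the Az.CosmosDB module is assumed.

# Create the SQL database and a container partitioned on /category
New-AzCosmosDBSqlDatabase `
    -ResourceGroupName "demogroup" `
    -AccountName "demo-cosmos-sql-account" `
    -Name "cosmoscontainer"

New-AzCosmosDBSqlContainer `
    -ResourceGroupName "demogroup" `
    -AccountName "demo-cosmos-sql-account" `
    -DatabaseName "cosmoscontainer" `
    -Name "items" `
    -PartitionKeyKind "Hash" `
    -PartitionKeyPath "/category" `
    -Throughput 400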


STEP – 3: Add data to Database


In the previous section, you set up a database and a collection; now it is time to add data to your new database using Data Explorer.

You can see the new database; expand the cosmoscontainer database and then expand the items collection. You will see several options for creating new items. Click Items under the collection and then select New Item from the top menu.

Data Explorer - Expand Items


A new item pane loads as soon as you click New Item; add the following JSON structure to the item.

{
    "id": "100",
    "category": "personal",
    "name": "groceries",
    "description": "do not forget apples.",
    "isComplete": false
}

Data Explorer - Add Item


Now add the new item by clicking the Save button. Next, select New Item again, then create and save another item with a unique id.

You can add any properties and values, since an item can have any structure; Azure Cosmos DB does not enforce a schema.

I added a couple of new entries, like – 

{
    "id": "101",
    "category": "personal",
    "name": "fashion",
    "description": "casual dress",
    "isComplete": false
}
{
    "id": "102",
    "category": "personal",
    "name": "medicine",
    "description": "bp medicines",
    "isComplete": false
}
{
    "id": "103",
    "category": "personal",
    "name": "groceries",
    "description": "green salad",
    "isComplete": false
}

You can see all the entries, listed by id and category, under the Items section.

Data Explorer - Added Items

STEP – 4: Query the Data


Azure Cosmos DB does not enforce any schema for data, so you are free to add different properties and values to the items. Now it is time to use Data Explorer to query, retrieve, and filter your data.

You can see a default SELECT query (typically SELECT * FROM c) at the top of the Items tab in Data Explorer; the query retrieves and displays all items in the collection in ID order.

Data Explorer - Saved Items


If required, you can change the query: select Edit Filter, replace the default filter with the one listed below, and then select Apply Filter.

WHERE c.name = 'groceries'

Data Explorer - Edit Query


It will display the filtered items as soon as you click the Apply Filter button.

Data Explorer - Apply Filter


As you can see, it is very similar to writing SQL queries; if you are comfortable with SQL syntax, you can easily write different queries in the query predicate box. In addition, you can use Data Explorer to create stored procedures, user-defined functions, and triggers.

Here, we walked through Azure Cosmos DB and saw how easy it is to work with your Azure Cosmos DB data. In the next article, we will create a .NET web app and see how feasible it is to work with Azure Cosmos DB from a .NET program, for example updating your data through a web app.

Keep visiting for further posts.

Saturday, May 25, 2019

Deploy files from local Repository to GitHub via Git


Git and GitHub


Git and GitHub – an Introduction


In the previous article, we talked about Version Control Systems, their different types, and the fundamental approaches of Git and GitHub. Version Control Systems are software tools that help development teams manage changes to source code over time; they keep track of every modification to the code in a special kind of database.

Git is one of the top Version Control Systems, a tool to manage your source code history. GitHub, in turn, is a web-based hosting service for Git repositories. It is worth visiting the preceding post to get an overview of Git and GitHub, because they work on top of the Repository and the Working Copy, which are not the same thing.

In this post, we will walk through some hands-on activities like – 
  • Connect an Ubuntu VM using PuTTY client
  • Install Git and set up your GitHub account
  • Execute some of the most popular Git commands
  • Push all the files from the local repository to GitHub


Meanwhile, if required, you can visit some previous articles to learn a bit more about the following topics – 


Pre-requisites


Before proceeding further, we need some pre-requisites in place to deploy all the source code files from the local repository to GitHub, like – 
  1. Azure subscription, if you don't have an account then sign up for a free Azure account - https://azure.microsoft.com/en-gb/free/
  2. A running Ubuntu Linux VM
  3. PuTTY client to be used as the SSH client
  4. GitHub account, if you don’t have an account then sign up - https://github.com/  


STEP – 1: Validate the existence of an Ubuntu VM


An Ubuntu Linux Azure Virtual Machine (VM) is essential to accomplish this demo task; log in to the Azure portal https://portal.azure.com/.

On the left Hub menu, click All resources and select the existing Ubuntu virtual machine. Verify whether the VM is running or stopped; if it is stopped, i.e. deallocated, then start it.

Ubuntu Linux VM

STEP – 2: Fetch the connection details of Ubuntu VM


Next, we need to connect to the VM; you can go with either an SSH key or the PuTTY client, depending on how the Ubuntu VM was configured and set up. 

I am moving ahead with the PuTTY client; click the Connect button on the menu bar to bring up the connection details.

Ubuntu VM Overview


Here a new blade, Connect to virtual machine, appears; copy the account details listed under Login using the VM local account, in my case - ssh demoadmin@40.117.153.69.

STEP – 3: Connect the VM using PuTTY client


Since the Ubuntu Linux VM is configured for password-based authentication, we can connect using the PuTTY client. Open PuTTY, and on the Session page, enter the host name we copied earlier into the Host Name box.

For example, in my case it was ssh demoadmin@40.117.153.69, but you only need to enter demoadmin@40.117.153.69, excluding the ssh prefix. Then, under Connection type, select SSH and click Open.

PuTTY Client


Once the SSH session has been established, it will prompt for the password of the connecting server; enter the administrator password you specified while provisioning the Ubuntu VM.

SSH Session


After authentication, you will be connected to the Ubuntu Linux 18.04.1 LTS Virtual Machine (VM) and ready to go ahead.

PuTTY Connected


STEP – 4: Install Git


Git is often already installed; it is better to check the installed version by executing the following command in the terminal.

git --version
  
Git Version


It will display the currently installed version; if Git is not installed, you can install it by executing the following command.

sudo apt-get install git

Git Installation


The package will start installing right away, though in my case the newer version was already installed.

STEP – 5: Set up GitHub account


We know that GitHub is a web-based hosting service for version control using Git. It offers different plans for public and private repositories; here we will demonstrate the hands-on activity on a public repository, i.e. the free version.

If you do not have an account, then navigate to https://github.com/.

Next, provide a user name, email id, and password, and then click Sign up for GitHub.

GitHub Sign up


You will then see a new page split into three steps. In the first step, click Create an account, which leads to some verification. In the second step, Choose your subscription, select Free and proceed by clicking the Continue button.  

Welcome to GitHub


In the next step, you can share basic information about yourself and your preferences, or you can skip it. Meanwhile, you will receive an email to verify your account. It is essential to verify your email address; once confirmed, your GitHub account is set up successfully. 

Do not forget to note down the user name and email id, which are required in the next steps.

Congratulations, the GitHub account set-up is done!! 😊

STEP – 6: Login from Git local to connect remote GitHub


Move to the terminal window and execute the following commands, replacing the username and email with the ones you noted in the previous steps. You need to provide the email address and user name registered with your GitHub account.

git config --global user.email your_email_id
git config --global user.name your_username

Login from Git local to connect remote GitHub

STEP – 7: Create multiple demo files and content in each file


For the hands-on activity, we need some code files covering different languages, so we will create some demo files with diverse extensions. 

First, create a folder to store all the files in one place by executing the following commands.

mkdir demoproject
cd demoproject

Sample files

Now we need to create the different files; you can use the touch command, which is the easiest way to create multiple new, empty files. Execute the following command.

touch index.html texts.txt c_program.c java_program.java index.js styles.css
ls -l

Empty files


Now you can use any text editor available in Linux, but I prefer the vi editor. To open a file and work in vi's editor mode, follow these steps – 
  • Open the editor – vi filename.extension
  • Activate insert mode – press the i key
  • Save and exit the vi editor – press Esc, then type :wq and press Enter 


For example, we need to add the below code to the c_program.c file.

#include <stdio.h>

int main() {
    printf("Hello! I am C-Program. Thank you!\n");
    return 0;
}

First, open the c_program.c file in the vi editor by executing the following command.

vi c_program.c

Vi editor

This launches the vi editor; switch to INSERT mode and type the above sample code. Once done, press the Esc key followed by :wq to save and return to the terminal.

Vi editor - insert mode


You can verify the contents using the cat command, which displays the contents of a file; execute it in the following way.

cat c_program.c

Cat command


Similarly, you can add some content to all the specified files, or skip some files that will not contain any code or statements.

STEP – 8: Initialize Git


Since all the files are to be pushed to GitHub, it is essential to initialize a .git folder inside the directory by executing the following commands.

git init
git add .
git commit . -m "I am pushing all the sample files to my GitHub"
git status

Note: follow the process step by step, confirming the output of each command as it executes.

Initialize Git


You can visit the official site for more information about the Git commands - https://git-scm.com/doc.

STEP – 9: Create a repository in GitHub account


Next, it is time to create a repository, which is a storage location where you can push code files, and from which you can pull them onto a computer. 

Log in to the GitHub account - https://github.com.

Move to the home page and click the Create a repository link that appears in the left-most corner, as you can see in the below snapshot.

GitHub Home Page


It will launch the Create a new repository page, where you need to provide details of the following properties – 
  • Owner/Repository name – Provide a short but meaningful name for your repository.
  • Description – Optional, but you can submit a description of the repository.
  • Public/Private – Go with the default selection, the Public option.
  • Initialize this repository with a README – It will place a README file in your repository.


Create a new repository


Once you have submitted the details, proceed and click the Create repository button. You will be navigated automatically into the repository you have created.

Repository


STEP – 10: Cloning with SSH URLs


Now that a repository has been created, you can use its URLs to clone the project onto your computer. Here we will use the SSH URL, which provides access to a Git repository via SSH, a secure protocol.

Navigate into the repository and click the Clone or download button, which offers two URL-based clone options, i.e. HTTPS and SSH. Click the Use SSH link.

Clone with SSH


If you get a message about SSH keys, something like “You don't have any public SSH keys in your GitHub account….”, then it is time to create an SSH key and add it to GitHub. 

Move to the terminal and generate an RSA key for the registered email id by executing the following command.

ssh-keygen -t rsa -C registered_email_id

SSH Key generation


The RSA key has been generated; you can display it on the terminal using the cat command, executed in the following way.

cat /home/demoadmin/.ssh/id_rsa.pub

Display Key


Copy the entire key, which needs to be added to the GitHub account. Move back to the same repository on the GitHub homepage and click the add a new public key link that appears with the Clone with SSH option. Alternatively, you can go via Personal Settings -> SSH and GPG keys.
  
Clone with SSH


The Add new key page loads as soon as you click the link; now provide a title for the key and paste the key into the Key box.

Add SSH Key


Save the SSH key by clicking Add SSH key, and you will see it acknowledged in the list of keys associated with the GitHub account.

SSH Keys


Once the RSA key has been added, you can get the SSH URL; move back to the Clone section and copy the SSH URL as displayed in the below snapshot.

Clone with SSH


STEP – 11: Add the Remote and Push using the SSH URL


Next, we need to add this new remote using the git remote add command in the terminal, in the same directory where the local repository is stored. The command takes the origin keyword followed by the remote URL, so use the SSH URL copied above as the remote.

Copy git remote add origin <SSH_URL_of_Your_GitHub_Repository> and execute it in the terminal, something like this.

git remote add origin git@github.com:rajendrxxxxxxxxxx/demoproject.git

Remote added


The remote origin has been added; now it is time to execute a push command, which sends the commits from your local branch in the local Git repository to the remote server. Execute the following command to deploy all the code files to the GitHub repository.

git push -u origin master

If you get a failed or rejected error with the above command, you can force the update by executing the following command (note that a forced push overwrites the remote history, so use it with care).

git push origin master --force

Push files

Navigate to the repository under the GitHub account and reload the page; you will see the newly added code files that were pushed from the local repository using Git.

Files deployed to GitHub


Congratulations, the files were deployed to GitHub via Git successfully!! 😊

I trust you now know a bit more about the Git and GitHub approach and enjoyed the hands-on activity. Along the way, we covered some essential Git commands while pushing files to GitHub. 

In further posts, we will cover more diverse topics and exercises; keep visiting the blog.