<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Debug School: Pavani</title>
    <description>The latest articles on Debug School by Pavani (@pavanip2011_561).</description>
    <link>https://www.debug.school/pavanip2011_561</link>
    <image>
      <url>https://www.debug.school/images/DhBlAYd78seCmwmhLqkXseH2QN80iFrlkskwVbJPBqQ/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvdXNl/ci9wcm9maWxlX2lt/YWdlLzU4LzQ4MGI2/NGQ0LTk3YjYtNDgw/Yi1iNDE1LWRmYjM2/OWIxODZmYS5wbmc</url>
      <title>Debug School: Pavani</title>
      <link>https://www.debug.school/pavanip2011_561</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://www.debug.school/feed/pavanip2011_561"/>
    <language>en</language>
    <item>
      <title>BASH SCRIPT AND ITS COMMANDS</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Tue, 13 Dec 2022 17:07:40 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/bash-script-and-its-commands-164p</link>
      <guid>https://www.debug.school/pavanip2011_561/bash-script-and-its-commands-164p</guid>
      <description>&lt;p&gt;BASH SCRIPT:&lt;br&gt;
The Linux Bash shell is also known as the 'Bourne-Again Shell'. A Bash script is a plain text file containing commands for step-by-step execution. These commands can be typed directly at the command line, but from a reusability perspective it is useful to store all of the inter-related commands for a specific task in a single file. We can then use that file to execute the set of commands one or more times as required.&lt;br&gt;
Example:&lt;br&gt;
 #!/bin/bash&lt;br&gt;
 # define a function&lt;br&gt;
 my_function () {&lt;br&gt;
 echo "hello world"&lt;br&gt;
 }&lt;br&gt;
 # function call&lt;br&gt;
 my_function&lt;/p&gt;
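&lt;p&gt;A minimal, runnable sketch of defining and calling a Bash function as described above (the function name is illustrative):&lt;/p&gt;

```shell
#!/bin/bash
# define a function
my_function () {
  echo "hello world"
}
# function call
my_function
```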

&lt;p&gt;Applications of Bash scripts: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Manipulating files&lt;/li&gt;
&lt;li&gt;Executing routine tasks like backup operations&lt;/li&gt;
&lt;li&gt;Automation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Advantages of Bash scripts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is simple.&lt;/li&gt;
&lt;li&gt;It helps to avoid doing repetitive tasks.&lt;/li&gt;
&lt;li&gt;Easy to use.&lt;/li&gt;
&lt;li&gt;Frequently performed tasks can be automated.&lt;/li&gt;
&lt;li&gt;A sequence of commands can be run as a single command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Disadvantages of Bash scripts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Any mistake while writing can be costly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A new process is launched for almost every shell command executed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Slow execution speed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compatibility problems between different platforms.&lt;br&gt;
Some commands of bash:&lt;br&gt;
1) basename: This command strips the directory and an optional suffix from a file name.&lt;br&gt;
syntax: $ basename /home/user/filename.txt .txt&lt;br&gt;
2) cal: displays the calendar.&lt;br&gt;
syntax: $ cal -y&lt;br&gt;
3) df: disk free. It shows the disk space usage in tabular form. The df command is useful for discovering the available free space on a system or file system.&lt;br&gt;
syntax: $ df&lt;br&gt;
-&amp;gt; df -h: displays the disk space in human-readable form.&lt;br&gt;
-&amp;gt; df -T: displays the file system type.&lt;br&gt;
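A quick, runnable sketch of basename as described above (the path below is illustrative):&lt;br&gt;

```shell
# strip the directory part and the .txt suffix from a path
basename /home/user/notes.txt .txt
# prints: notes
```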
4) diff: diff stands for difference. This command displays the differences between files by comparing them line by line. It tells us which lines in one file must be changed to make the two files identical.&lt;br&gt;
-&amp;gt; diff uses certain special symbols and instructions that indicate what is required to make two files identical:&lt;br&gt;
a- add&lt;br&gt;
c- change&lt;br&gt;
d- delete&lt;br&gt;
syntax: $ diff file1.txt file2.txt&lt;br&gt;
-&amp;gt; -c: To view the differences in context mode, use the -c option.&lt;br&gt;
-&amp;gt; -u: To view the differences in unified mode, use the -u option.&lt;br&gt;
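A small runnable sketch of diff, using two throwaway files with illustrative names:&lt;br&gt;

```shell
# create two small files that differ on the second line
printf 'one\ntwo\n'   > file1.txt
printf 'one\nthree\n' > file2.txt
# diff exits with status 1 and describes the change needed ("2c2")
diff file1.txt file2.txt || true
# unified view of the same difference
diff -u file1.txt file2.txt || true
```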
5) dir: This command is used to list the contents of a directory.&lt;br&gt;
syntax: $ dir&lt;br&gt;
-&amp;gt; dir -a: displays all the hidden files (starting with &lt;code&gt;.&lt;/code&gt;) along with the two entries &lt;code&gt;.&lt;/code&gt; and &lt;code&gt;..&lt;/code&gt;, which signify the current and parent directory respectively.&lt;br&gt;
-&amp;gt; dir -A: similar to the -a option, except that it does not display the entries for the current and parent directory.&lt;br&gt;
-&amp;gt; dir -l --author: displays the author of all the files.&lt;br&gt;
6) dmesg: driver message or display message; used to examine the kernel ring buffer and print the kernel's message buffer. The output of this command contains the messages produced by the device drivers.&lt;br&gt;
syntax: dmesg&lt;br&gt;
-&amp;gt; dmesg | grep word: filters the output for a given word.&lt;br&gt;
-&amp;gt; dmesg -T: prints human-readable timestamps (-t omits timestamps).&lt;br&gt;
7) du: the du command, short for disk usage, is used to estimate file space usage. It can be used to track the files and directories which are consuming an excessive amount of space on the hard disk drive.&lt;br&gt;
-&amp;gt; du -h: displays the sizes in human-readable form.&lt;br&gt;
-&amp;gt; du -a: prints all files as well as directories.&lt;br&gt;
-&amp;gt; du -c: prints a grand total.&lt;br&gt;
-&amp;gt; du -s: displays only a summary total for each argument.&lt;br&gt;
8) egrep: egrep is a pattern-searching command which belongs to the grep family; it is equivalent to grep -E.&lt;br&gt;
syntax: egrep pattern filename&lt;br&gt;
-&amp;gt; -c: counts and prints the number of lines that match the pattern, not the lines themselves.&lt;br&gt;
-&amp;gt; -v: prints the lines that do not match the pattern.&lt;br&gt;
-&amp;gt; -o: prints only the matched part of the line, not the entire line.&lt;br&gt;
9) eval: eval is a built-in Linux command which is used to execute its arguments as a shell command. It combines the arguments into a single string, uses it as input to the shell, and executes the resulting command.&lt;br&gt;
syntax: eval arg&lt;br&gt;
example: c="clear"; eval $c&lt;br&gt;
10) expand: converts tabs into spaces in a file; when no file is specified, it reads from standard input.&lt;br&gt;
syntax: expand filename&lt;br&gt;
11) expr: evaluates the given expression and displays the result.&lt;br&gt;
syntax: expr expression&lt;br&gt;
12) fold: the fold command in Linux wraps each line in an input file to fit a specified width and prints it to standard output. The default width is 80.&lt;br&gt;
syntax: fold filename&lt;br&gt;
-&amp;gt; fold -w[n] filename: limits the width of the output to n columns, overriding the default of 80.&lt;br&gt;
-&amp;gt; fold -b[n] filename: limits the width of the output by the number of bytes rather than the number of columns.&lt;br&gt;
-&amp;gt; -s: breaks the lines on spaces so that words are not split. If a segment of the line contains a blank character within the first width column positions, the line is broken after the last such blank character meeting the width constraint.&lt;br&gt;
syntax: fold -w[n] -s filename&lt;br&gt;
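A runnable sketch of fold with and without -s (the sample string is illustrative):&lt;br&gt;

```shell
# wrap a 14-character line at 5 columns, then again breaking only at spaces
printf 'aaaa bbbb cccc\n' | fold -w 5
printf 'aaaa bbbb cccc\n' | fold -w 5 -s
```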
13) free: displays the total amount of free memory along with the amount of memory and swap used in the system, as well as the buffers used by the kernel.&lt;br&gt;
syntax: $ free&lt;br&gt;
14) gawk: the gawk command (GNU awk) is used for pattern scanning and text processing.&lt;br&gt;
syntax: $ gawk '{print $1}' filename&lt;br&gt;
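gawk implements the standard awk language, so the sketch below uses awk, which behaves identically for this program (the sample lines are illustrative):&lt;br&gt;

```shell
# print the first whitespace-separated field of each line
printf 'alice 30\nbob 25\n' | awk '{print $1}'
```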
15) id: This command is used to display the user identity and group identity.&lt;br&gt;
syntax: $ id&lt;br&gt;
16) join: The join command is used to join two files based on a key field present in both files. The input fields can be separated by white space or any delimiter.&lt;br&gt;
syntax: join file1.txt file2.txt&lt;br&gt;
17) look: The look command in Linux shows the lines beginning with a given string. This command uses binary search if the file is sorted. If no file is specified, the file /usr/share/dict/words is used.&lt;br&gt;
syntax: $ look string filename&lt;br&gt;
18) ls: This command lists the contents of a directory.&lt;br&gt;
syntax: ls -l&lt;br&gt;
19) more: The more command is used to view text files in the terminal, displaying one screen at a time when the file is large. The more command also allows the user to scroll up and down through the page.&lt;br&gt;
syntax: more filename&lt;br&gt;
20) nl: This command is used to number the lines of a file.&lt;br&gt;
syntax: $ nl filename&lt;br&gt;
21) paste:  It is used to join files horizontally by outputting lines consisting of lines from each file specified, separated by tab as delimiter, to the standard output.&lt;br&gt;
syntax: paste file1 file2&lt;br&gt;
22) ps: process status. The ps command is used to list the currently running processes and their PIDs, along with other information depending on the options used.&lt;br&gt;
syntax: ps -a&lt;br&gt;
23) rm: remove files&lt;br&gt;
syntax: $ rm filename&lt;br&gt;
-&amp;gt; $ rm -f filename: remove files forcefully without prompting for confirmation.&lt;br&gt;
24) rmdir: remove empty directories from the file system&lt;br&gt;
syntax: $rmdir directory&lt;br&gt;
25) scp (secure copy): The scp command allows secure transfer of files between the local host and a remote host, or between two remote hosts. It uses the same authentication and security as the Secure Shell (SSH) protocol. scp is known for its simplicity, security and pre-installed availability.&lt;br&gt;
syntax: $ scp sourcefile user@host:directory&lt;br&gt;
26) seq: It is used to generate numbers from FIRST to LAST in steps of INCREMENT.&lt;br&gt;
syntax: seq first increment last&lt;br&gt;
seq 0 3 20&lt;br&gt;
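A runnable sketch of seq with a smaller range for brevity:&lt;br&gt;

```shell
# numbers from 0 to 9 in steps of 3
seq 0 3 9
# prints 0, 3, 6, 9, one per line
```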
27) set: It is used to set or unset specific flags and settings inside the shell environment. It can be used to change or display the shell attributes and parameters.&lt;br&gt;
-&amp;gt; -a: marks variables that are created or modified for export.&lt;br&gt;
-&amp;gt; -b: notifies of job termination.&lt;br&gt;
-&amp;gt; -e: exits when a command exits with a non-zero status.&lt;br&gt;
-&amp;gt; -f: disables file name generation, known as globbing.&lt;br&gt;
-&amp;gt; -h: remembers (hashes) the location of commands as they are looked up.&lt;br&gt;
-&amp;gt; -k: places all assignment arguments in the environment of a command.&lt;br&gt;
example: set -x&lt;br&gt;
echo apple orange mango&lt;br&gt;
28) sleep: This command delays execution for a specified amount of time.&lt;br&gt;
  syntax: sleep 10s&lt;br&gt;
s- seconds&lt;br&gt;
m- minutes&lt;br&gt;
h- hours&lt;br&gt;
29) sort: The sort command is used to sort a file, arranging the records in a particular order. It sorts the contents of a text file line by line.&lt;br&gt;
 syntax: sort filename.txt&lt;br&gt;
-&amp;gt; sort -r filename.txt: sorts the file in reverse order.&lt;br&gt;
30) sudo: super user do. It runs a command with elevated privileges. This is similar to 'Run as administrator' in Windows.&lt;br&gt;
syntax: sudo -l&lt;br&gt;
31) sum: The sum command in Linux is used to find the checksum and count the blocks in a file. Basically, this command shows the checksum and block count for each specified file.&lt;br&gt;
syntax: sum -r filename&lt;br&gt;
32) time: This command prints a summary of the real time, user CPU time and system CPU time spent executing a command when it terminates. 'real' is the elapsed wall-clock time taken by the command, while 'user' and 'sys' are the number of CPU seconds the command uses in user and kernel mode respectively.&lt;br&gt;
syntax: $ time command&lt;br&gt;
33) touch: This command updates the timestamps of the input file if it exists, and creates an empty file if it does not.&lt;br&gt;
syntax: touch filename&lt;br&gt;
34) top: This command shows summary information about the system and the list of processes that are currently running.&lt;br&gt;
syntax: $ top&lt;br&gt;
35) tr: This command is used to translate or delete characters, for example converting lower case to upper case or deleting selected characters.&lt;br&gt;
syntax: cat filename | tr '[a-z]' '[A-Z]'&lt;br&gt;
-&amp;gt; the -s option is used to squeeze repeated occurrences of a character.&lt;br&gt;
-&amp;gt; the -d option is used to delete characters.&lt;br&gt;
36) tty: It displays information about the terminal. It prints the file name of the terminal connected to standard input.&lt;br&gt;
syntax: $ tty&lt;br&gt;
37) type: This command describes how each argument would be interpreted if used as a command. It shows whether a command is a shell builtin or a binary file.&lt;br&gt;
syntax: $ type commandname&lt;br&gt;
38) ulimit: It is used to view or set limits on the resource usage of the current user.&lt;br&gt;
-&amp;gt; ulimit -a: to check all the current ulimit values.&lt;br&gt;
-&amp;gt; ulimit -u: to display the maximum number of processes for the logged-in user.&lt;br&gt;
-&amp;gt; ulimit -f: to show the maximum size of files the user can create.&lt;br&gt;
39) uname: prints system information.&lt;br&gt;
syntax: $ uname&lt;br&gt;
40) wc: word count (wc) is used to print the line, word and byte counts.&lt;br&gt;
syntax: $ wc filename&lt;br&gt;
41) . (dot): runs a command script in the current shell.&lt;br&gt;
42) !!: runs the last command again.&lt;br&gt;
43) whoami: prints the current effective username.&lt;br&gt;
syntax: $ whoami&lt;br&gt;
44) who: prints all users currently logged in.&lt;br&gt;
syntax: $ who&lt;br&gt;
45) vmstat: reports virtual memory statistics.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
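&lt;p&gt;A few of the commands above (sort -r and tr) can be sketched as runnable examples; the sample strings are illustrative:&lt;/p&gt;

```shell
# reverse (descending) sort of three lines
printf 'banana\napple\ncherry\n' | sort -r
# translate lower case to upper case
echo 'hello' | tr '[a-z]' '[A-Z]'
# squeeze repeated occurrences of the listed characters
echo 'aabbcc' | tr -s 'ab'
```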

</description>
    </item>
    <item>
      <title>SHELL SCRIPT</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Tue, 13 Dec 2022 06:07:45 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/shell-script-35ga</link>
      <guid>https://www.debug.school/pavanip2011_561/shell-script-35ga</guid>
      <description>&lt;p&gt;&lt;strong&gt;SHELL&lt;/strong&gt;: A shell is a special user program which provides an interface for the user to use operating system services. The shell accepts human-readable commands from the user and converts them into something the kernel can understand. It is a command-language interpreter that executes commands read from input devices such as the keyboard or from files. The shell starts when the user logs in or opens a terminal.&lt;br&gt;
It is classified into 2 types:&lt;br&gt;
    1) command-line shell: The shell can be accessed by the user through a command-line interface. A special program called Terminal in Linux/macOS, or Command Prompt in Windows, is provided to type in human-readable commands such as “cat”, “ls” etc., which are then executed.&lt;br&gt;
    2) graphical shell: Graphical shells provide means for manipulating programs based on a graphical user interface (GUI), by allowing operations such as opening, closing, moving and resizing windows, as well as switching focus between windows.&lt;/p&gt;

&lt;p&gt;Several shells are available for Linux systems, such as:&lt;/p&gt;

&lt;p&gt;BASH (Bourne Again Shell) – the most widely used shell in Linux systems. It is used as the default login shell in Linux systems and in macOS, and can also be installed on Windows.&lt;br&gt;
CSH (C Shell) – the C shell's syntax and usage are very similar to the C programming language.&lt;br&gt;
KSH (Korn Shell) – the Korn shell was also the base for the&lt;br&gt;
POSIX (Portable Operating System Interface) shell standard specification.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each shell does the same job but understands different commands and provides different built-in functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Why do we need shell scripts?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;1) To avoid repetitive work, through automation&lt;br&gt;
   2) System admins use shell scripting for routine backups&lt;br&gt;
   3) System monitoring&lt;br&gt;
   4) Adding new functionality to the shell.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Advantages of shell scripts&lt;/p&gt;

&lt;p&gt;1) The commands and syntax are exactly the same as those&lt;br&gt;
   entered directly at the command line, so the programmer does not need&lt;br&gt;
   to switch to an entirely different syntax&lt;br&gt;
 2) Writing shell scripts is much quicker&lt;br&gt;
 3) Quick start&lt;br&gt;
 4) Interactive debugging, etc.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Disadvantages of shell scripts&lt;/p&gt;

&lt;p&gt;1) Prone to costly errors; a single mistake can change the&lt;br&gt;
    command in a way which might be harmful&lt;br&gt;
   2) Slow execution speed&lt;br&gt;
   3) Design flaws within the language syntax or implementation&lt;br&gt;
   4) Not well suited for large and complex tasks&lt;br&gt;
   5) Provides minimal data structures, unlike other scripting&lt;br&gt;
      languages, etc.&lt;br&gt;
&lt;strong&gt;Shell commands&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;1) cat: It is generally used to concatenate files. It writes its output to standard output, and can also create a file with content.&lt;br&gt;
     syntax: $ cat employeename&lt;/p&gt;

&lt;p&gt;2) more: It is a filter for paging through text one screenful at a time.&lt;br&gt;
  syntax: $ more employeename&lt;/p&gt;

&lt;p&gt;3) less: It is used to view files without opening them in an editor. It is similar to the more command but allows backward as well as forward movement.&lt;br&gt;
  syntax: $ less employee.txt&lt;/p&gt;

&lt;p&gt;4) head: It is used to print the first N lines of a file. It accepts N as input and the default value of N is 10.&lt;br&gt;
        syntax: $ head employee.txt&lt;br&gt;
 We can pass an N value (head -n N) so that it prints that number of lines.&lt;/p&gt;

&lt;p&gt;5) tail: Used to print the last N lines of a file. It accepts N as input and the default value of N is 10.&lt;br&gt;
  syntax: $ tail -n N employee.txt&lt;/p&gt;
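&lt;p&gt;A runnable sketch of head and tail on a small sample file (the file name is illustrative):&lt;/p&gt;

```shell
# four-line sample file
printf 'line1\nline2\nline3\nline4\n' > employee.txt
head -n 2 employee.txt   # first two lines
tail -n 2 employee.txt   # last two lines
```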

&lt;p&gt;File and directory manipulation commands:&lt;/p&gt;

&lt;p&gt;6) mkdir: The mkdir command is used to create a directory in the current directory; with mkdir -p, it creates directories along the specified path.&lt;/p&gt;

&lt;p&gt;7) cp: This command will copy the files and directories from the source path to the destination path. It can copy a file/directory with the new name to the destination path. It accepts the source file/directory and destination file/directory.&lt;/p&gt;

&lt;p&gt;8) mv: Used to move files or directories. This command works much like cp, but it removes the file or directory from the source path.&lt;/p&gt;

&lt;p&gt;9) rm : Used to remove files or directories.&lt;/p&gt;

&lt;p&gt;10) touch: It is used to create a file without any content. The file created using the touch command is empty. This command can be used when the user doesn’t yet have data to store at the time of file creation.&lt;br&gt;
        $ touch employeename&lt;br&gt;
touch -a: This option is used to change or update only the last access time of a file.&lt;/p&gt;

&lt;p&gt;touch -c: This option checks whether the file exists; if it does not, it is not created. It updates the timestamps of existing files while avoiding the creation of new ones.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;              touch -c employee.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;touch -c -d: This is used to set the access and modification time to a given date, without creating the file if it does not exist.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        touch -c-d employee.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;touch -m : This is used to change the modification time only. It only updates last modification time.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;             touch -m employeename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;11) grep: The grep filter searches a file for a particular pattern of characters, and displays all lines that contain that pattern.&lt;br&gt;
     -c: Prints only a count of the lines that match a&lt;br&gt;
         pattern&lt;br&gt;
     -h: Displays the matched lines, but does not display the&lt;br&gt;
          filenames.&lt;br&gt;
     -i: Ignores case when matching&lt;br&gt;
     -l: Displays a list of matching filenames only.&lt;br&gt;
     -n: Displays the matched lines and their line numbers.&lt;br&gt;
     -v: Prints out all the lines that do not match the&lt;br&gt;
         pattern&lt;br&gt;
     -e exp: Specifies an expression with this option. Can be used&lt;br&gt;
              multiple times.&lt;br&gt;
     -f file: Takes patterns from a file, one per line.&lt;br&gt;
      -E: Treats the pattern as an extended regular expression (ERE)&lt;br&gt;
      -w: Matches whole words only&lt;br&gt;
      -o: Prints only the matched parts of a matching line,&lt;br&gt;
            with each such part on a separate output line.&lt;/p&gt;
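&lt;p&gt;A runnable sketch of the -c, -i, -n and -v options on a small sample file (the file name and contents are illustrative):&lt;/p&gt;

```shell
# sample file with mixed-case matches
printf 'apple\nApple pie\nbanana\n' > fruit.txt
grep -c -i apple fruit.txt   # count matching lines, ignoring case
grep -n apple fruit.txt      # matching lines with line numbers
grep -v apple fruit.txt      # lines that do not match
```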

&lt;p&gt;12) sort: The sort command is used to sort a file, arranging the records in a particular order. By default, the sort command sorts assuming the contents are ASCII text. Using options, the sort command can also sort numerically.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  -r Option: Sorting In Reverse Order: You can perform a reverse-order sort using the -r flag. the -r flag is an option of the sort command which sorts the input file in reverse order i.e. descending order by default. 

 -n Option: To sort a file numerically used –n option. -n option is also predefined in Unix as the above options are. This option is used to sort the file with numeric data present inside. 

  -nr option: To sort a file with numeric data in reverse order we can use the combination of two options as stated below. 

  -k Option: Unix provides the feature of sorting a table on the basis of any column number by using -k option. 

  -c option: This option is used to check if the file given is already sorted or not &amp;amp; checks if a file is already sorted pass the -c option to sort. This will write to standard output if there are lines that are out of order. The sort of tool can be used to understand if this file is sorted, and which lines are out of order 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
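&lt;p&gt;A runnable sketch of the -n and -k options on a small two-column file (the file name and contents are illustrative):&lt;/p&gt;

```shell
# numeric vs. default sort
printf '10 bob\n2 amy\n' > scores.txt
sort scores.txt        # lexicographic: "10 bob" sorts before "2 amy"
sort -n scores.txt     # numeric: "2 amy" comes first
sort -k 2 scores.txt   # sort on the second column (the names)
```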

&lt;p&gt;13) wc: It is used to find the number of lines, words, bytes and characters in the files specified as arguments.&lt;br&gt;
By default, it displays four-column output: the first column shows the number of lines in the file, the second the number of words, the third the number of characters, and the fourth the file name given as an argument.&lt;br&gt;
           syntax: $ wc filename&lt;br&gt;
     -c: This option displays the count of bytes in a file. With this option it displays two-column output: the number of bytes and the file name.&lt;br&gt;
     -m: Using the -m option, the wc command displays the count of characters in a file.&lt;br&gt;
     -L: The wc command allows an argument -L, which prints the length of the longest line (in characters) in a file.&lt;/p&gt;
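&lt;p&gt;A runnable sketch of counting lines and words (the file name and contents are illustrative; reading from standard input suppresses the file-name column):&lt;/p&gt;

```shell
# two lines, three words
printf 'one two\nthree\n' > sample.txt
wc -l < sample.txt   # number of lines
wc -w < sample.txt   # number of words
```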

&lt;p&gt;14) cut: It can be used to cut parts of a line by byte position, character or field. Basically, the cut command slices a line and extracts the text. It is necessary to specify an option with the command, otherwise it gives an error. If more than one file name is provided, the data from each file is not preceded by its file name.&lt;br&gt;
            syntax: cut [option] filename&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   -b(byte): To extract the specific bytes, you need to follow -b option with the list of byte numbers separated by comma.
   -c (column): To cut by character use the -c option. This selects the characters given to the -c option. 
   -f (field): -c option is useful for fixed-length lines. Most unix files doesn’t have fixed-length lines
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
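&lt;p&gt;A runnable sketch of the -f and -c options (the sample strings are illustrative):&lt;/p&gt;

```shell
# extract a field by delimiter and characters by position
echo 'alice:x:1001' | cut -d ':' -f 1   # first ':'-separated field
echo 'abcdef' | cut -c 1-3              # first three characters
```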

&lt;p&gt;15) echo: echo is a built-in shell command used to display a variable, the result of an expression, a string, a number, or any other useful information. It is available in most shells, and is most commonly used in shell scripting with bash and the C shell. This command can also be used to display the arguments sent to a shell program.&lt;br&gt;
         syntax: echo string&lt;br&gt;
    a) To display a text or string on the console&lt;br&gt;
             echo hello world&lt;br&gt;
    b) To display a variable value on the console&lt;br&gt;
              x=5; echo $x&lt;br&gt;
    c) To remove spaces from a given string and display it on the&lt;br&gt;
       console (-e enables backslash escapes; \b is a backspace)&lt;br&gt;
                echo -e "this \bis \bmy \bworld"&lt;br&gt;
     d) To print all files or folders in the current directory&lt;br&gt;
                 echo *&lt;br&gt;
16) Variables: Variables are a type of parameter generally managed by the user or the system. We can take the example of $var, a variable parameter: the system sets $var, but this variable parameter can also be written by the user.&lt;/p&gt;
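&lt;p&gt;A runnable sketch of echo with a variable and with globbing (the variable name is illustrative):&lt;/p&gt;

```shell
x=5
echo "the value is $x"   # variable expansion inside double quotes
echo *                   # expands to the files in the current directory
```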

&lt;p&gt;17) special parameters: The special parameters are read-only and maintained by the shell, each with a predefined meaning. Below are the various special parameters:&lt;br&gt;
         a) $#: This parameter represents the total number of&lt;br&gt;
                 arguments passed to the script.&lt;br&gt;
         b) $0: This parameter represents the script name.&lt;br&gt;
         c) $n: This parameter represents the arguments&lt;br&gt;
                corresponding to a script when the script is invoked,&lt;br&gt;
                such as $1, $2, etc. $1, $2, etc. are called positional&lt;br&gt;
                parameters.&lt;br&gt;
         d) $*: This parameter expands to the positional parameters&lt;br&gt;
                separated by spaces. For example, if there are&lt;br&gt;
                two arguments passed to the script, this parameter&lt;br&gt;
                will describe them as $1 $2.&lt;br&gt;
         e) $$: This parameter represents the process ID of the&lt;br&gt;
                shell in which the execution is taking place.&lt;br&gt;
         f) $!: This parameter represents the process ID of&lt;br&gt;
                the background job that was executed last.&lt;br&gt;
         g) $@: This parameter is similar to the parameter $*.&lt;br&gt;
         h) $?: This parameter represents the exit status of the last&lt;br&gt;
                command that was executed. Here 0 represents&lt;br&gt;
                success and a non-zero value represents failure.&lt;br&gt;
        i) $_: This parameter expands to the last argument of the&lt;br&gt;
                command that was executed previously.&lt;br&gt;
        j) $-: This parameter prints the current option&lt;br&gt;
                flags, where the set command can be used to modify&lt;br&gt;
                the option flags.&lt;br&gt;
      syntax: $ cat program.sh&lt;br&gt;
             echo "The File Name is: $0"&lt;br&gt;
             echo "The First argument is: $1"&lt;br&gt;
             echo "The Second argument is: $2"&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;          $ sh program.sh ab cd
          The File Name is: program.sh
          The First argument is: ab
          The Second argument is: cd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;18) pwd: ‘pwd‘ stands for ‘Print Working Directory‘. As the name states, the pwd command prints the current working directory, i.e. the directory the user is in at present. It prints the current directory name with the complete path starting from root (/). This command is a shell builtin and is available in most shells – bash, Bourne shell, ksh, zsh, etc.&lt;br&gt;
         syntax: $ pwd&lt;/p&gt;

&lt;p&gt;19) ifconfig: It is used to view and configure the kernel-resident network interfaces. This command is mainly used at boot time to set up interfaces as necessary. Otherwise, the ifconfig command mainly comes into play when some system tuning or debugging is needed.&lt;/p&gt;

&lt;p&gt;20) netstat: This is one major command which tops the list of shell scripting commands. netstat is used to display network-related information such as routing tables, network connections, masquerade connections, interface statistics and multicast memberships. The -a option is used to list all network ports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/gDHid6R_0FOT_BOyvHT4oqdchkOdTmBXorM2bNU6r8s/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvOTE5bXY4/Y3ZiNGlwbjA5amFz/MGEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/gDHid6R_0FOT_BOyvHT4oqdchkOdTmBXorM2bNU6r8s/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvOTE5bXY4/Y3ZiNGlwbjA5amFz/MGEucG5n" alt="Image description" width="880" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;21) nslookup: This shell scripting command is mainly used by infra management and techOps/DevOps teams, as they are required to deal with networking at a deep level. It is a network utility command which displays information about internet servers. It queries the Domain Name Server and fetches the result related to server name information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/OYbsYRupveOS3_w1VO2HeE1-FgTBvuiya3zqvwtIw2A/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvb3I3Mjdk/ZHQyeWk2M2R2Nnpy/a2QucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/OYbsYRupveOS3_w1VO2HeE1-FgTBvuiya3zqvwtIw2A/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvb3I3Mjdk/ZHQyeWk2M2R2Nnpy/a2QucG5n" alt="Image description" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;22) uptime: This command shows how long the system has been running, along with the current time, the number of logged-in users and the load averages. It can be used to check what happened while the server was left unattended, for example whether it was rebooted or unusually loaded.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/5UyEu8PinKMvy4MRF0UpjFKfDuyj80sv9NRO4a7anQg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMveHBxOGpz/ajY0YjlsdWU0dTZ5/amwucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/5UyEu8PinKMvy4MRF0UpjFKfDuyj80sv9NRO4a7anQg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMveHBxOGpz/ajY0YjlsdWU0dTZ5/amwucG5n" alt="Image description" width="806" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;23) wall: This is one of the most essential shell scripting commands, especially for an administrator, as it can be used to broadcast a message to any number of users, namely all those who have their mesg permission set to yes. The message is provided as an argument to wall, or sent to it on standard input.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/1m7cvfSbtRlxS2BwRBgPn3GI29O1HaALXoventa2HwU/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZHI1enRv/d3Zud3dyaWo0YXQ4/MG8ucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/1m7cvfSbtRlxS2BwRBgPn3GI29O1HaALXoventa2HwU/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZHI1enRv/d3Zud3dyaWo0YXQ4/MG8ucG5n" alt="Image description" width="771" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;24) mesg: This command lets you control whether other users can write to your terminal (for example with the “write” command) by providing a y|n option.&lt;/p&gt;

&lt;p&gt;25) w: Though just a one-letter command, w can work wonders: it is a combination of the who and uptime commands, given in sequence one immediately after the other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/u6IAkKIHV_Ps5OA3tWeagryxiWXdJYzXkVGvdkTN6o4/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvdGI4N213/dWtnNXo4YjJsM2k3/aXEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/u6IAkKIHV_Ps5OA3tWeagryxiWXdJYzXkVGvdkTN6o4/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvdGI4N213/dWtnNXo4YjJsM2k3/aXEucG5n" alt="Image description" width="632" height="95"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;26) top: It is used to display all the processes running on the CPU. This command is best known for refreshing itself, continuously displaying all the CPU processes that are up and running at a point in time, until an interrupt is given.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/nMo9L8pc7NcRbXzJyU9G32rI49-j8UHAMKqyu5mEDmY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvYWJxenY2/NmkwYnJqcW53emZ2/b3IucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/nMo9L8pc7NcRbXzJyU9G32rI49-j8UHAMKqyu5mEDmY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvYWJxenY2/NmkwYnJqcW53emZ2/b3IucG5n" alt="Image description" width="802" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;27) Arithmetic operators&lt;br&gt;
The first type of operator is the arithmetic operator. These are an extension of the operators we use in mathematics, and we are already well aware of them, but for the sake of listing them, the different operators are:&lt;br&gt;
a) Addition operator (+): adds 2 operands.&lt;br&gt;
b) Subtraction operator (-): subtracts the second operand from the first.&lt;br&gt;
c) Multiplication operator (*): multiplies 2 operands.&lt;br&gt;
d) Division operator (/): divides one operand by the other; it gives only the quotient.&lt;br&gt;
e) Modulus operator (%): gives the remainder when one operand is divided by the other.&lt;br&gt;
f) Increment operator (++): increments the operand’s value by 1.&lt;br&gt;
g) Decrement operator (--): decrements the operand’s value by 1.&lt;/p&gt;
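&lt;p&gt;A minimal sketch of these operators using bash’s $(( )) arithmetic expansion (the variable names and values are illustrative):&lt;/p&gt;

```shell
#!/bin/bash
# Arithmetic in bash happens inside $(( )); operands are integers.
a=10
b=3
echo "sum:        $((a + b))"   # 13
echo "difference: $((a - b))"   # 7
echo "product:    $((a * b))"   # 30
echo "quotient:   $((a / b))"   # 3 (integer division: quotient only)
echo "remainder:  $((a % b))"   # 1
((a++))   # increment: a becomes 11
((b--))   # decrement: b becomes 2
echo "after increment/decrement: a=$a b=$b"
```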

&lt;p&gt;28) Relational operators&lt;br&gt;
As the name suggests, these operators determine the relation between 2 operands. The output of a relational operator is either true or false, irrespective of which operator is used. Some of these relational operators are:&lt;/p&gt;

&lt;p&gt;a) ‘==’ Operator: This operator compares the two operands and evaluates to true if they are equal and false if they are not. It can be used for integers as well as strings; for strings, using it inside [[ ]] enables pattern matching.&lt;br&gt;
b) ‘!=’ Operator: This operator evaluates to true if the two operands are not equal and false if they are. It also works for integers and strings; for strings, it returns true if the strings don’t match.&lt;br&gt;
c) ‘&amp;lt;’ Operator: This operator evaluates to true if the first operand is less than the second, and false otherwise. It is not widely used for strings.&lt;br&gt;
d) ‘&amp;lt;=’ Operator: This operator evaluates to true if the first operand is less than or equal to the second, and false otherwise. It is not available for strings.&lt;br&gt;
e) ‘&amp;gt;’ Operator: This operator evaluates to true if the first operand is greater than the second, and false otherwise. It is not widely used for strings.&lt;br&gt;
f) ‘&amp;gt;=’ Operator: This operator evaluates to true if the first operand is greater than or equal to the second, and false otherwise. It is not available for strings.&lt;/p&gt;
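&lt;p&gt;A minimal sketch of these comparisons: inside (( )) the symbols compare integers, while inside [[ ]] the == operator does string pattern matching (the values are illustrative):&lt;/p&gt;

```shell
#!/bin/bash
x=5
y=10
# Inside (( )) the relational symbols compare integers and the
# test succeeds (true) or fails (false):
if (( x == y )); then echo "equal"; else echo "not equal"; fi
if (( x != y )); then echo "x and y differ"; fi
if (( x <  y )); then echo "x is less than y"; fi
if (( x <= y )); then echo "x is at most y"; fi
if (( y >  x )); then echo "y is greater than x"; fi
if (( y >= x )); then echo "y is at least x"; fi
# Inside [[ ]], == compares strings with pattern matching:
s="hello"
if [[ $s == h* ]]; then echo "s starts with h"; fi
```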

&lt;p&gt;29) Logical operators: These operators combine 2 or more conditions and return true or false based on them.&lt;/p&gt;

&lt;p&gt;a) Logical AND operator: returns true only if both sets of conditions are true, and false otherwise.&lt;br&gt;
b) Logical OR operator: returns true if either of the conditions on any side is true.&lt;br&gt;
c) NOT operator: reverses the result of the condition it is applied to and returns that as the output.&lt;/p&gt;
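&lt;p&gt;A minimal sketch of the three logical operators inside [[ ]] (the variable and values are illustrative):&lt;/p&gt;

```shell
#!/bin/bash
a=5
# AND (&&): true only when both conditions are true.
if [[ $a -gt 0 && $a -lt 10 ]]; then echo "a is between 0 and 10"; fi
# OR (||): true when either condition is true.
if [[ $a -lt 0 || $a -gt 3 ]]; then echo "a is negative or greater than 3"; fi
# NOT (!): reverses the result of a condition.
if ! [[ $a -eq 0 ]]; then echo "a is not zero"; fi
```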

&lt;p&gt;30) Bitwise operators&lt;br&gt;
These operators are much like the logical operators, except that they work on bit patterns. One essential thing to keep in mind is that &amp;lt;&amp;lt; and &amp;gt;&amp;gt; are not less-than or greater-than comparisons here, but bit shifts, as explained below.&lt;/p&gt;
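&lt;p&gt;All of these bit operations are available in bash’s $(( )) arithmetic expansion; a minimal sketch (the values are illustrative):&lt;/p&gt;

```shell
#!/bin/bash
a=12   # binary 1100
b=10   # binary 1010
echo "AND:         $((a & b))"    # 8  (binary 1000)
echo "OR:          $((a | b))"    # 14 (binary 1110)
echo "XOR:         $((a ^ b))"    # 6  (binary 0110)
echo "complement:  $((~a))"       # -13 (two's complement of 12)
echo "shift left:  $((a << 2))"   # 48
echo "shift right: $((a >> 2))"   # 3
```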

&lt;ul&gt;
&lt;li&gt;AND (&amp;amp;)&lt;/li&gt;
&lt;li&gt;OR (|)&lt;/li&gt;
&lt;li&gt;XOR (^)&lt;/li&gt;
&lt;li&gt;Complement (~)&lt;/li&gt;
&lt;li&gt;Shift left (&amp;lt;&amp;lt;)&lt;/li&gt;
&lt;li&gt;Shift right (&amp;gt;&amp;gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;31) File operators: The final piece of the puzzle is the file operators, essentially used for testing file properties:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;-b: Checks if the file is a block special file or not.&lt;/li&gt;
&lt;li&gt;-c: Checks if the file is a character special file or not.&lt;/li&gt;
&lt;li&gt;-d: Checks if the name of the directory exists or not.&lt;/li&gt;
&lt;li&gt;-e: Checks if the file exists or not.&lt;/li&gt;
&lt;li&gt;-r: Checks if the file has “read” access or not.&lt;/li&gt;
&lt;li&gt;-w: Checks if the file has “write” access or not.&lt;/li&gt;
&lt;li&gt;-x: Checks if the file has “execute” access or not.&lt;/li&gt;
&lt;li&gt;-s: Checks the file size and returns true if the size is greater than 0.&lt;/li&gt;
&lt;/ol&gt;
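&lt;p&gt;A minimal sketch of a few of these file test operators; mktemp creates a scratch file purely for illustration:&lt;/p&gt;

```shell
#!/bin/bash
# Create a scratch file to test against.
file=$(mktemp)
echo "hello" > "$file"
[ -e "$file" ] && echo "file exists"
[ -r "$file" ] && echo "file is readable"
[ -w "$file" ] && echo "file is writable"
[ -x "$file" ] || echo "file is not executable"
[ -s "$file" ] && echo "file is non-empty"
[ -d "$file" ] || echo "file is not a directory"
rm -f "$file"
```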

&lt;p&gt;32) if-else: “if-else” is a conditional or control statement in a programming language. It performs a different action or computation depending on whether a condition is true or false.&lt;/p&gt;

&lt;p&gt;“if-else” deals with two groups of statements: if the condition is true, the first group of statements executes; if it is false, the second group executes.&lt;br&gt;
      syntax:&lt;br&gt;
if [ condition ]&lt;br&gt;
then&lt;br&gt;
Statement 1 (first Group)&lt;br&gt;
else&lt;br&gt;
Statement 2 (second Group)&lt;br&gt;
fi&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/sdZa8ZUOofyrrsbkNujv8El8CiD4TuV9i31LO0F4dCw/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbGJhMTEx/emtidTloMXNiaXBn/YXcucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/sdZa8ZUOofyrrsbkNujv8El8CiD4TuV9i31LO0F4dCw/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbGJhMTEx/emtidTloMXNiaXBn/YXcucG5n" alt="Image description" width="762" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;33) while loop: A while loop provides a way to execute the same set of commands repeatedly by checking a condition: while the condition is satisfied the body executes, and the loop repeats until the condition fails.&lt;br&gt;
syntax: &lt;br&gt;
while [condition]&lt;br&gt;
do&lt;br&gt;
command1&lt;br&gt;
command2&lt;br&gt;
done&lt;/p&gt;
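&lt;p&gt;Following the syntax above, a minimal runnable sketch that prints the numbers 1 to 5:&lt;/p&gt;

```shell
#!/bin/bash
# The loop ends when the condition i <= 5 fails.
i=1
while [ "$i" -le 5 ]
do
    echo "iteration $i"
    i=$((i + 1))
done
```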

&lt;p&gt;34) Local variables: These variables are present only within the running instance of the shell. If child processes are started by the shell script, the local variables are not accessible to those child processes.&lt;/p&gt;

&lt;p&gt;35) Environment variables: These types of variables are the ones that are accessible to any child process the shell script has run, unlike the local variables.&lt;/p&gt;
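&lt;p&gt;The difference between local and environment variables can be seen in a short sketch (the variable names are illustrative):&lt;/p&gt;

```shell
#!/bin/bash
LOCAL_VAR="only in this shell"          # local: not exported
export ENV_VAR="visible to children"    # environment: exported
# A child process sees ENV_VAR but not LOCAL_VAR:
bash -c 'echo "child sees ENV_VAR=[$ENV_VAR] LOCAL_VAR=[$LOCAL_VAR]"'
# prints: child sees ENV_VAR=[visible to children] LOCAL_VAR=[]
```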

&lt;p&gt;36) Shell variables: In certain cases, the shell might be required to set some variables in order to smooth the execution of the script and these variables are known as shell variables.&lt;br&gt;
   a) allexport: This option marks variables and functions to be exported to the environment. Until we set this option, all variables are local; when it is turned on, all variables and functions are exported to the environment of subshells.&lt;br&gt;
  b) braceexpand: This variable performs brace expansion. This is a method used for generating strings at the command line. Mainly used for reusing a file path which is quite long to be written again in bash.&lt;/p&gt;

&lt;p&gt;c) emacs: Using this variable one can use emacs styled editing interface in the command line.&lt;br&gt;
  d) errexit: This option makes shell scripts exit immediately if a command returns a non-zero status.&lt;br&gt;
  e) errtrace: Traps are cool techniques for implementing error handling when using bash. In this any errors which can be trapped is inherited by shell functions or substitutions in command.&lt;br&gt;
  f) functrace: Like the previous variable, this variable helps traps on DEBUG and RETURN to be inherited by shell functions or substitutions in command.&lt;br&gt;
  g) hashall: This variable helps in remembering the location of commands as and when they are looked up for execution.&lt;br&gt;
  h) histexpand: This enables shell scripts to use ! style history substitution. One may have stumbled across an error after using ! in a sentence; that is because of ! style history substitution. If this annoys you, you can turn this option off.&lt;br&gt;
 i) ignoreeof: If you want to press Ctrl + D without leaving the shell or ending your session, you would use this option. In short, when the shell reads EOF, it will not exit. For example, if you set IGNOREEOF=18, you would have to press Ctrl+D 18 times to leave the shell.&lt;br&gt;
j) history: This is to enable command history. You would have noticed these while pressing up button you can see the previous commands you have used.&lt;br&gt;
k) monitor: This variable enables the shell script to have job control during execution.&lt;br&gt;
l) noclobber: This variable enables bash to not overwrite an existing file using &amp;gt;, &amp;gt;&amp;amp; operators.&lt;br&gt;
m) noexec: Using this variable one would just read the commands and not execute them. This is widely used for doing syntax checks in the code.&lt;br&gt;
n) noglob: This variable option is used for disabling a file name generation or in other words, pathname expansion.&lt;br&gt;
o) nounset: In the case of parameter expansion, any unset parameters are treated as an error when this variable is used in the set command.&lt;br&gt;
p) onecmd: This variable as the name suggests executes one command and then exits.&lt;br&gt;
q) physical: When this variable is used, symbolic links don’t work. For example, if one needs to change the directory, they can’t use the cd as a physical directory structure would be used.&lt;br&gt;
r) pipefail: When this variable is used, the pipeline is returned with a value which is the last command to exit with a non-zero status. This is to understand the last point of error in the code.&lt;br&gt;
s) posix: When used, bash changes its behavior wherever the default operation differs from the POSIX standard, so that it matches the standard.&lt;br&gt;
t) privileged:  This allows security by running the shell script without inheriting it from the environment and hence the environment variables are not accessible.&lt;br&gt;
u) verbose: This causes the input lines of the shell to be printed as they are read.&lt;br&gt;
v) vi: This is to start a vi styled editing interface.&lt;br&gt;
w) xtrace: This allows all the commands and their arguments to be printed as they get executed. Widely used for tracing back to an error in case many shell processes are run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/6Q1yuVzZC6PZjm8iGniGTwADGpnVIZ7X5r6k6EHkwxA/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbGR1cjEx/b3pkdXVmejcydjls/cWsucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/6Q1yuVzZC6PZjm8iGniGTwADGpnVIZ7X5r6k6EHkwxA/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbGR1cjEx/b3pkdXVmejcydjls/cWsucG5n" alt="Image description" width="795" height="610"&gt;&lt;/a&gt;&lt;br&gt;
In the output you would see that all the command post point 1 gets printed as they are executed.&lt;/p&gt;
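&lt;p&gt;These options are switched on and off with the set builtin; a minimal sketch of a few of them (errexit, nounset, xtrace):&lt;/p&gt;

```shell
#!/bin/bash
# Options are toggled with `set -o name` / `set +o name`
# (many also have short flags: -e, -u, -x).
set -o errexit   # same as set -e: exit on any non-zero status
set -o nounset   # same as set -u: treat unset variables as errors
set -o xtrace    # same as set -x: print each command before running it
msg="tracing is on"
echo "$msg"
set +o xtrace    # switch tracing back off
```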

&lt;p&gt;37) ls: ls is the command responsible for listing the folders and files present in a particular directory. This shell scripting command is often combined with options such as -l, -lt or -ltr, depending on the need.&lt;br&gt;
 syntax: ls [options] [path]&lt;/p&gt;

&lt;p&gt;38) Piping (|): This is another very basic command that feeds the output of one command straight into another. This symbol, called a pipe, is most often seen along with the grep command. In some places, piping is also called chaining.&lt;br&gt;
 syntax: cat filename | grep string&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/BRMwoYly5SiqqnGN4j0O5viCrocBA0DZyy731dMIu7M/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvaDBhM3Zn/OWgwc2ExczZweTJt/NTAucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/BRMwoYly5SiqqnGN4j0O5viCrocBA0DZyy731dMIu7M/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvaDBhM3Zn/OWgwc2ExczZweTJt/NTAucG5n" alt="Image description" width="466" height="99"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;39) dig: This is another intermediate command, used to query Domain Name System (DNS) servers and display information about host addresses, name servers, mail exchanges and related records. It is mostly used to query a single given host.&lt;/p&gt;

&lt;p&gt;40) rename: This command is used to rename a file.&lt;/p&gt;

&lt;p&gt;41) chown: Different users in the operating system have ownership and permission to ensure that the files are secure and put restrictions on who can modify the contents of the files. &lt;br&gt;
  Each user has some properties associated with them, such as a user ID and a home directory. We can add users into a group to make the process of managing users easier.&lt;br&gt;
  A group can have zero or more users. A specified user can be associated with a “default group”. It can also be a member of other groups on the system as well.&lt;br&gt;
  Ownership and Permissions: To protect and secure files and directory we use permissions to control what a user can do with a file or directory. It uses three types of permissions:&lt;br&gt;&lt;br&gt;
Read: This permission allows the user to read a file; for directories, it lets the user read the directories and subdirectories stored in it.&lt;br&gt;
Write: This permission allows a user to modify and delete a file. For directories, it allows a user to modify the directory’s contents (create, delete and rename files in it), although these changes also require execute permission on the directory.&lt;br&gt;
Execute: This permission on a file allows it to be executed. For example, a file named php.sh won’t run unless we give it execute permission.&lt;br&gt;
Types of file Permissions:&lt;br&gt;&lt;br&gt;
User: This type of file permission affects the owner of the file.&lt;br&gt;
Group: This type of file permission affects the group that owns the file. If the owner user is in this group, the user permissions apply instead of the group permissions.&lt;br&gt;
Other: This type of file permission affects all other users on the system.&lt;br&gt;
syntax:&lt;br&gt;
chown [OPTION]… [OWNER][:[GROUP]] FILE…&lt;br&gt;
chown [OPTION]… --reference=RFILE FILE…&lt;br&gt;
Example: to change the owner of a file: &lt;br&gt;
chown owner_name file_name&lt;/p&gt;

&lt;p&gt;42) chgrp: chgrp command is used to change the group ownership of a file or directory. All files belong to an owner and a group. You can set the owner by using “chown” command, and the group by the “chgrp” command.&lt;/p&gt;

&lt;p&gt;Syntax:&lt;/p&gt;

&lt;p&gt;chgrp [OPTION]… GROUP FILE…&lt;br&gt;
chgrp [OPTION]… --reference=RFILE FILE…&lt;/p&gt;

&lt;p&gt;43) chmod: the chmod command is used to change the access mode of a file.&lt;/p&gt;

&lt;p&gt;Syntax :chmod [reference][operator][mode] file... &lt;/p&gt;

&lt;p&gt;The references are used to distinguish the users to whom the permissions apply, i.e. they are a list of letters specifying to whom to give the permissions. &lt;/p&gt;

&lt;p&gt;Reference   Class    Description&lt;br&gt;
u           owner    the file's owner&lt;br&gt;
g           group    users who are members of the file's group&lt;br&gt;
o           others   users who are neither the file's owner nor members of the file's group&lt;br&gt;
a           all      all three of the above, same as ugo&lt;/p&gt;

&lt;p&gt;The operator is used to specify how the modes of a file should be adjusted. The following operators are accepted:&lt;/p&gt;

&lt;p&gt;Operator  Description&lt;br&gt;
+         Adds the specified modes to the specified classes&lt;br&gt;
-         Removes the specified modes from the specified classes&lt;br&gt;
=         Makes the specified modes the exact modes for the specified classes&lt;/p&gt;

&lt;p&gt;The modes indicate which permissions are to be granted or removed from the specified classes. There are three basic modes which correspond to the basic permissions:&lt;/p&gt;

&lt;p&gt;r       Permission to read the file.&lt;br&gt;
w       Permission to write (or delete) the file.&lt;br&gt;
x       Permission to execute the file, or, in&lt;br&gt;
        the case of a directory, search it.&lt;/p&gt;
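&lt;p&gt;Combining the references, operators and modes above, a minimal sketch (the file name is illustrative):&lt;/p&gt;

```shell
#!/bin/bash
# Symbolic mode = reference (u/g/o/a) + operator (+/-/=) + mode (r/w/x).
touch demo.sh
chmod u+x demo.sh    # add execute permission for the owner
chmod go-w demo.sh   # remove write permission for group and others
chmod a=r demo.sh    # make read-only the exact mode for everyone
ls -l demo.sh
rm -f demo.sh
```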

&lt;p&gt;44) IFS: IFS stands for Internal Field Separator. It is an environment variable that defines the field separator. By default, space, tab, and newline are considered field separators.&lt;/p&gt;
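&lt;p&gt;A minimal sketch of changing IFS to split a colon-separated record (the record is illustrative):&lt;/p&gt;

```shell
#!/bin/bash
# Setting IFS just for the read splits the line on colons
# instead of the default space/tab/newline.
line="alice:x:1001"
IFS=':' read -r user pass uid <<< "$line"
echo "user=$user uid=$uid"
# prints: user=alice uid=1001
```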

&lt;p&gt;45) env: env is used to print environment variables, or to run a utility or command in a custom environment.&lt;br&gt;
 syntax: env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]&lt;br&gt;
  a) Without any argument: prints a list of all environment variables.&lt;br&gt;
b) -u or --unset: removes a variable from the environment&lt;br&gt;
     $ env -u variable_name&lt;/p&gt;

&lt;p&gt;46) kill: This is a built-in command used to terminate processes manually. kill sends a signal to a process; if the user doesn’t specify a signal to send along with the kill command, the default TERM signal is sent, which terminates the process.&lt;br&gt;
       a) kill -l: displays all the available signals.&lt;br&gt;
                  Syntax: $ kill -l&lt;br&gt;
       b) kill pid: sends the default signal to the process with the given PID.&lt;br&gt;
            Syntax: $ kill pid&lt;br&gt;
      c) kill -s: sends a specific signal to a process.&lt;br&gt;
               Syntax: kill {-signal | -s signal} pid&lt;br&gt;
       d) kill -L: lists the available signals in a table format.&lt;br&gt;
                Syntax: kill {-l | --list[=signal] | -L | --table}  &lt;/p&gt;
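&lt;p&gt;A minimal sketch of kill in action, terminating a background process started from the same script (sleep is just a stand-in for a long-running process):&lt;/p&gt;

```shell
#!/bin/bash
sleep 300 &                # a stand-in long-running process
pid=$!                     # PID of the background job
kill -l | head -n 1        # show the first row of available signals
kill "$pid"                # send the default TERM signal
wait "$pid" 2>/dev/null    # reap the child; status reflects the signal
echo "process $pid terminated"
```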

</description>
    </item>
    <item>
      <title>EBS vs EFS vs S3</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Fri, 02 Dec 2022 17:51:49 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/ebs-vs-efs-vs-s3-3lb1</link>
      <guid>https://www.debug.school/pavanip2011_561/ebs-vs-efs-vs-s3-3lb1</guid>
      <description>&lt;p&gt;&lt;strong&gt;AMAZON STORAGE SERVICES&lt;/strong&gt;: &lt;br&gt;
AWS provides wide range storage services that can be used accordingly to your project and use case. These are virtual storage services on cloud that allow to store data in a scalable and highly available manner and removes the hassle of buying and &lt;br&gt;
maintaining physical equipment for data storage. These storage services are divided basing on the storge types i.e file storage, block storage and object storage.&lt;/p&gt;

&lt;p&gt;Amazon Elastic Block Store (EBS): &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/0JR5pZkjaIUy9a608-pstgJC_p3tf4SLyASdH8YmYbQ/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvMDh4MTBq/eHd3dnQ5a2J5YmZt/anEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/0JR5pZkjaIUy9a608-pstgJC_p3tf4SLyASdH8YmYbQ/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvMDh4MTBq/eHd3dnQ5a2J5YmZt/anEucG5n" alt="Image description" width="398" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EBS (Amazon Elastic Block Store) is a block type storage provided by AWS. It is similar to hard drives on our physical machines. EBS is primarily used by the AWS EC2 instances for persistent data storage which means that even if EC2 instances are shut down, the data on EBS volume is not lost.&lt;br&gt;
EBS volumes can be dynamically attached, detached and scaled with EC2 instances. &lt;br&gt;
Types of EBS volumes:&lt;br&gt;
There are two types of EBS volumes:&lt;br&gt;
       1) SSD-backed volumes &lt;br&gt;
       2) HDD-backed volumes &lt;br&gt;
&lt;a href="https://www.debug.school/images/ftwjc3SXSnV5WGyp7hcJ8EdsoofJrnN9f1H5OZTnBcg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZm9ybjY4/N3FhYXVwY2Y5bDVv/djEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/ftwjc3SXSnV5WGyp7hcJ8EdsoofJrnN9f1H5OZTnBcg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZm9ybjY4/N3FhYXVwY2Y5bDVv/djEucG5n" alt="Image description" width="880" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Features of EBS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Scalability: EBS volume sizes and features can be scaled as per the needs of the system. This can be done in two ways:&lt;br&gt;
a) Take a snapshot of the volume and create a new volume using &lt;br&gt;
   the Snapshot with new updated features.&lt;br&gt;
b) Updating the existing EBS volume from the console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Backup: Users can create snapshots of EBS volumes that act as backups. Snapshot can be created manually at any point in time or can be scheduled. Snapshots are stored on AWS S3 and are charged according to the S3 storage charges. Snapshots are incremental in nature. New volumes across regions can be created from snapshots.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Encryption: Encryption can be a basic requirement when it comes to storage, for example due to government or regulatory compliance. EBS offers an AWS-managed encryption feature.&lt;br&gt;
Users can enable encryption when creating EBS volumes by clicking on a checkbox.&lt;br&gt;
-&amp;gt; Encryption Keys are managed by the Key Management Service (KMS) provided by AWS.&lt;br&gt;
-&amp;gt; Encrypted volumes can only be attached to selected instance types.&lt;br&gt;
-&amp;gt; Encryption uses the AES-256 algorithm.&lt;br&gt;
-&amp;gt; Snapshots from encrypted volumes are encrypted and similarly, volumes created from snapshots are encrypted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Charges: AWS charges users for the storage they provision, not just the storage they use. For example, if you use 1 GB of storage in a 5 GB volume, you’d still be charged for the full 5 GB EBS volume. EBS charges vary from region to region. EBS volumes are independent of the EC2 instance they are attached to; the data in an EBS volume remains unchanged even if the instance is rebooted or terminated.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Relational and NoSQL databases can also be deployed and scaled with EBS. Various databases like MySQL, Oracle, MSSQL, Cassandra, MongoDB, etc. are supported.&lt;/p&gt;

&lt;p&gt;Drawbacks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;EBS is not recommended as temporary storage.&lt;/li&gt;
&lt;li&gt;They cannot be used as a multi-instance accessed storage as they cannot be shared between instances.&lt;/li&gt;
&lt;li&gt;The durability offered by services like AWS S3 and AWS EFS is greater.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/KYF47ZIDxlqxFZ15msP2bt4rpG2OhJdqusRRwJQSISg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbjhvamxv/aTQ0d21rbmRkNTY4/aTAucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/KYF47ZIDxlqxFZ15msP2bt4rpG2OhJdqusRRwJQSISg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbjhvamxv/aTQ0d21rbmRkNTY4/aTAucG5n" alt="Image description" width="320" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A single EBS volume can only be attached to one EC2 instance at a time. However, one EC2 instance can have more than one EBS volume attached to it.&lt;/p&gt;

&lt;p&gt;Amazon Simple Storage Service (s3): &lt;br&gt;
&lt;a href="https://www.debug.school/images/PtL95DBxR8uTiXoK2tyj13MJCiYVIGaxup4iygCuh_g/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZG04cXU3/Z2JiOGwyMGphMHU0/MjIucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/PtL95DBxR8uTiXoK2tyj13MJCiYVIGaxup4iygCuh_g/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZG04cXU3/Z2JiOGwyMGphMHU0/MjIucG5n" alt="Image description" width="395" height="249"&gt;&lt;/a&gt;&lt;br&gt;
S3 (Amazon Simple Storage Service) is an object type store where users can upload and store objects into it. The objects are stored in a container which is referred to as a bucket. We can put objects into a bucket or get the objects from a bucket. An object can be a file of any type like a text file, a video, an image, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/ix4u06mbP2jHDjqKIqQ1Vh9hYtiQR6j15RsV8QPeBRg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvcjl5aWZp/a2FkbGFqY2VkNnE1/MDIucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/ix4u06mbP2jHDjqKIqQ1Vh9hYtiQR6j15RsV8QPeBRg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvcjl5aWZp/a2FkbGFqY2VkNnE1/MDIucG5n" alt="Image description" width="616" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Features of AWS S3:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Durability: AWS claims Amazon S3 to have 99.999999999% durability (11 9’s). This means the chance of losing data stored on S3 is vanishingly small: on average, if you store 10,000,000 objects, you can expect to lose a single object once every 10,000 years.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Availability: AWS ensures that the up-time of AWS S3 is 99.99% for standard access. Note that availability relates to being able to access data, while durability relates to losing data altogether.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Server-Side-Encryption (SSE): AWS S3 supports three types of SSE models:&lt;br&gt;
   SSE-S3: AWS S3 manages encryption keys.&lt;br&gt;
   SSE-C: The customer manages encryption keys.&lt;br&gt;
   SSE-KMS: The AWS Key Management Service (KMS) manages the &lt;br&gt;
            encryption keys.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;File Size support: AWS S3 can hold files of size ranging from 0 bytes to 5 terabytes. A 5TB limit on file size should not be a blocker for most of the applications in the world.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Infinite storage space: Theoretically AWS S3 is supposed to have infinite storage space. This makes S3 infinitely scalable for all kinds of use cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pay as you use: The users are charged according to the S3 storage they hold.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS-S3 is region-specific.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use cases for S3: AWS S3 can be used by people with all kinds of use cases like mobile/web applications, big data, machine learning and many more.&lt;/p&gt;

&lt;p&gt;Amazon Elastic File System (EFS):&lt;br&gt;
&lt;a href="https://www.debug.school/images/QDdPcdrGayFSJ8a3R13PIvCIU5fWODPiYEnU9nMeNrc/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZTlqaG9t/bGVlcnh3b2djZjA5/YWgucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/QDdPcdrGayFSJ8a3R13PIvCIU5fWODPiYEnU9nMeNrc/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZTlqaG9t/bGVlcnh3b2djZjA5/YWgucG5n" alt="Image description" width="166" height="196"&gt;&lt;/a&gt;&lt;br&gt;
 EFS (Amazon Elastic File System) is a file-based storage service which is somewhat similar to the NAS (Network Attached Storage). EFS is a file-level, fully managed, storage provided by AWS that can be accessed by multiple EC2 instances concurrently. Just like the AWS EBS, EFS is specially designed for high throughput and low latency applications.&lt;br&gt;
Features of AWS EFS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Storage capacity: EFS provides an infinite amount of storage capacity. This capacity grows and shrinks as required by the user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fully Managed: EFS takes away the overhead of creating, managing, and maintaining file servers and storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi EC-2 Connectivity: EFS can be shared between any number of EC-2 instances by using mount targets.&lt;br&gt;
Note: A mount target is an access point for AWS EFS that is attached to EC2 instances, allowing them access to the EFS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Availability: AWS EFS is region-specific; however, it can be present in multiple availability zones in a single region.&lt;br&gt;
 -&amp;gt; EC-2 instances across different availability zones can &lt;br&gt;
    connect to EFS in that zone for a quicker access&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EFS Life Cycle Management: Lifecycle management moves files between storage classes. Users can select a retention period parameter (in number of days). Any file in standard storage which is not accessed within this period is moved to the Infrequent Access (IA) class for cost saving.&lt;br&gt;
-&amp;gt; Note that the retention period of a file in standard storage resets each time the file is accessed.&lt;br&gt;
-&amp;gt; Files accessed in the IA EFS class are then moved back to standard storage.&lt;br&gt;
-&amp;gt; Note that file metadata and files under 128KB cannot be transferred to the IA storage class.&lt;br&gt;
-&amp;gt; Life Cycle management can be turned on and off as deemed fit by the users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Durability: Multi availability zone presence accounts for the high durability of the Elastic File System.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Transfer: Data can be transferred from on-premises to EFS in the cloud using the AWS DataSync service. DataSync can also be used to transfer data between multiple EFS file systems across regions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
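&lt;p&gt;The lifecycle rules listed above can be sketched as a toy model. This is an illustration of the behaviour only, not the EFS API; the 30-day retention setting and file names are made up:&lt;/p&gt;

```python
# Toy model of EFS lifecycle management (illustration only, not the AWS API).
# Files idle past the retention period move from Standard to Infrequent
# Access (IA); files under 128 KB never move, and any access brings a file
# back to Standard and resets its timer.

RETENTION_DAYS = 30          # example retention-period setting
MIN_IA_SIZE = 128 * 1024     # files below 128 KB stay in Standard

class FileSystemSim:
    def __init__(self):
        self.files = {}      # name: {"size", "idle_days", "class"}

    def put(self, name, size):
        self.files[name] = {"size": size, "idle_days": 0, "class": "Standard"}

    def access(self, name):
        f = self.files[name]
        f["idle_days"] = 0           # retention timer resets on access
        f["class"] = "Standard"      # IA files move back to Standard

    def tick(self, days):
        for f in self.files.values():
            f["idle_days"] += days
            if (f["class"] == "Standard"
                    and f["idle_days"] >= RETENTION_DAYS
                    and f["size"] >= MIN_IA_SIZE):
                f["class"] = "IA"

fs = FileSystemSim()
fs.put("big.log", 5 * 1024 * 1024)   # 5 MB
fs.put("tiny.txt", 4 * 1024)         # 4 KB, too small for IA
fs.tick(31)
print(fs.files["big.log"]["class"])   # IA
print(fs.files["tiny.txt"]["class"])  # Standard (under 128 KB)
fs.access("big.log")
print(fs.files["big.log"]["class"])   # Standard again
```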

&lt;p&gt;Use cases for EFS: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Multiple server architectures: In AWS, only EFS provides a shared file system, so applications that require multiple servers to share one single file system have to use EFS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Big Data Analytics: Virtually infinite capacity and extremely high throughput makes EFS highly suitable for storing files for Big data analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reliable data file storage: EBS data is stored redundantly in a single Availability Zone, whereas EFS data is stored redundantly across multiple Availability Zones, making it more robust and reliable than EBS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Media Processing: High capacity and high throughput make EFS highly favorable for processing big media files.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/WRK39NbHl0UagOuucTJmqslPABeJK6JKCGc4FnSzIW8/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvNnhhNGQ0/aWc3bXMxd2c0YWQ1/ZjUucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/WRK39NbHl0UagOuucTJmqslPABeJK6JKCGc4FnSzIW8/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvNnhhNGQ0/aWc3bXMxd2c0YWQ1/ZjUucG5n" alt="Image description" width="196" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EBS vs EFS vs S3: &lt;br&gt;
The differences between EBS, EFS, and S3 are as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/g6rtJbO0j9h2RotNuxf1oHITTj8-LelKe6SoJMi8mMY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbm5rcGJz/emNuNzNycjhvd2Vr/Y28ucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/g6rtJbO0j9h2RotNuxf1oHITTj8-LelKe6SoJMi8mMY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbm5rcGJz/emNuNzNycjhvd2Vr/Y28ucG5n" alt="Image description" width="880" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Network firewall</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Fri, 02 Dec 2022 16:44:02 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/network-firewall-1p32</link>
      <guid>https://www.debug.school/pavanip2011_561/network-firewall-1p32</guid>
      <description>&lt;p&gt;FIREWALL:&lt;br&gt;
A firewall is a network security device, either hardware or software-based, which monitors all incoming and outgoing traffic and based on a defined set of security rules it accepts, rejects or drops that specific traffic.&lt;br&gt;
 -&amp;gt; Accept: allow the traffic.&lt;br&gt;
 -&amp;gt; Reject: block the traffic, but reply with an “unreachable” error.&lt;br&gt;
 -&amp;gt; Drop: block the traffic with no reply.&lt;/p&gt;
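&lt;p&gt;The three actions above can be sketched as a toy packet filter. This is an illustration of the idea only, not a real firewall; the rules and port numbers are hypothetical:&lt;/p&gt;

```python
# Toy packet filter illustrating the three firewall actions:
# accept forwards the traffic, reject blocks it and sends an
# "unreachable" reply, drop blocks it silently.

RULES = [
    {"port": 22,  "action": "drop"},    # hypothetical rules for illustration
    {"port": 23,  "action": "reject"},
    {"port": 443, "action": "accept"},
]

def filter_packet(port, default="drop"):
    for rule in RULES:
        if rule["port"] == port:
            if rule["action"] == "accept":
                return "forwarded"
            if rule["action"] == "reject":
                return "blocked, unreachable error sent"
            return "blocked silently"
    return default  # unmatched traffic falls through to the default action

print(filter_packet(443))  # forwarded
print(filter_packet(23))   # blocked, unreachable error sent
print(filter_packet(22))   # blocked silently
```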

&lt;p&gt;A firewall establishes a barrier between secured internal networks and outside untrusted networks, such as the Internet.&lt;/p&gt;

&lt;p&gt;Network Firewall: &lt;br&gt;
&lt;a href="https://www.debug.school/images/cZcm6KKCo7jbYpJU1L3aUIPo5EhY2mUdIr155tx-G80/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvNjgwamV4/aDQ5d3NyZzM3OGdp/ZWYucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/cZcm6KKCo7jbYpJU1L3aUIPo5EhY2mUdIr155tx-G80/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvNjgwamV4/aDQ5d3NyZzM3OGdp/ZWYucG5n" alt="Image description" width="467" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Network Firewall is a managed service that helps deploy network protections for Amazon VPCs. It provides fine-grained network traffic control that allows you to restrict outbound requests to prevent malicious activity from spreading. You can import previously created rules in common open-source rule formats and enable integrations with managed intelligence feeds from AWS partners. With AWS Firewall Manager, you can create policies based on AWS Network Firewall rules and then apply those policies centrally across your VPCs and accounts.&lt;br&gt;
Features: &lt;br&gt;
 -&amp;gt; Automatically scales firewall capacity up or down based on the traffic load. &lt;br&gt;
 -&amp;gt; Supports inbound and outbound web filtering for unencrypted web traffic. The intrusion prevention system matches network traffic patterns to known threat signatures based on attributes. &lt;br&gt;
 -&amp;gt; Centrally deploy and manage security policies across applications, VPCs, and accounts in AWS Organizations.&lt;br&gt;
 -&amp;gt; AWS Network Firewall has a highly flexible rules engine.&lt;br&gt;
 -&amp;gt; AWS Network Firewall supports thousands of rules, and the rules can be based on domain, port, protocol, IP addresses, and pattern matching.&lt;/p&gt;

&lt;p&gt;Concepts:&lt;br&gt;
1) Firewall: A traffic filtering logic for VPC subnets. The firewall configuration provides the parameters for the Availability Zones and subnets in which the firewall endpoints are located. &lt;br&gt;
2) Rule groups: A set of rules to match against VPC traffic, plus the actions to take when a match is found. You can create a custom rule group or use one that is managed by AWS. Rule groups come in two categories, stateless and stateful: a stateless rule examines a single network traffic packet without taking into account the context of other packets, while a stateful rule inspects network traffic packets in the context of their traffic flow. (A subnet designated for a firewall endpoint is called a firewall subnet.)&lt;br&gt;
3) Monitoring: You can use the following monitoring tools with Network Firewall:&lt;br&gt;
     . Amazon CloudWatch&lt;br&gt;
     . Amazon CloudWatch Logs&lt;br&gt;
     . AWS CloudTrail&lt;br&gt;
     . AWS Config&lt;/p&gt;
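&lt;p&gt;The stateless/stateful distinction in rule groups can be sketched in a few lines. This is a minimal illustration of the concept, not the Network Firewall API; the packet fields used are made up:&lt;/p&gt;

```python
# A stateless rule judges one packet in isolation; a stateful rule
# consults a connection table built up from earlier packets in the flow.

def stateless_allow(packet):
    # Decision uses only this packet's own fields.
    return packet["dst_port"] == 443

class StatefulInspector:
    def __init__(self):
        self.flows = set()   # connection table: flows seen initiated

    def allow(self, packet):
        if packet.get("syn"):            # outbound connection start
            self.flows.add((packet["src"], packet["dst"]))
            return True
        # Replies are allowed only for flows already in the table.
        return (packet["dst"], packet["src"]) in self.flows

insp = StatefulInspector()
print(insp.allow({"src": "10.0.0.5", "dst": "1.2.3.4", "syn": True}))  # True
print(insp.allow({"src": "1.2.3.4", "dst": "10.0.0.5"}))  # True: known flow
print(insp.allow({"src": "9.9.9.9", "dst": "10.0.0.5"}))  # False: unsolicited
```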

&lt;p&gt;Firewall logging is only available for traffic that you route to the stateful rules engine. Traffic is forwarded to the stateful engine via stateless rule actions and default actions.&lt;br&gt;
  -&amp;gt;Using a stateful engine, you can record flow logs and alert logs.&lt;br&gt;
     . Flow logs – standard network traffic flow logs.&lt;br&gt;
     . Alert logs – report traffic that matches your stateful rules.&lt;br&gt;
Logs contain the following information:&lt;br&gt;
    - firewall-name&lt;br&gt;
    - availability-zone&lt;br&gt;
    - event-timestamp&lt;br&gt;
    - Event&lt;/p&gt;

&lt;p&gt;You can configure the destinations of your logs to various AWS services:&lt;br&gt;
     - Amazon S3&lt;br&gt;
     - CloudWatch Logs&lt;br&gt;
     - Kinesis Data Firehose&lt;br&gt;
Pricing:&lt;br&gt;
You are charged at an hourly rate for each firewall endpoint, and for the amount of traffic processed by the firewall endpoint, billed by the gigabyte. Data transferred across AWS Network Firewall incurs standard AWS data transfer fees.&lt;/p&gt;
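&lt;p&gt;The pricing model above reduces to simple arithmetic. A back-of-the-envelope sketch follows; the two rates are placeholders for illustration, not current AWS prices:&lt;/p&gt;

```python
# Rough monthly-cost estimate for the pricing model described above:
# an hourly charge per firewall endpoint plus a per-GB traffic charge.
# Rates below are hypothetical placeholders, not quoted AWS prices.

ENDPOINT_HOURLY_RATE = 0.395   # assumed $/endpoint-hour
PER_GB_RATE = 0.065            # assumed $/GB processed

def monthly_firewall_cost(endpoints, gb_processed, hours=730):
    endpoint_cost = endpoints * hours * ENDPOINT_HOURLY_RATE
    traffic_cost = gb_processed * PER_GB_RATE
    return round(endpoint_cost + traffic_cost, 2)

# Two endpoints processing 1 TB of traffic in a 730-hour month:
print(monthly_firewall_cost(endpoints=2, gb_processed=1000))  # 641.7
```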

&lt;p&gt;&lt;a href="https://www.debug.school/images/3Y4wgubopabszm7GN8u7i7XCaA--LGrP1s_1VMN91Cw/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvOXl3eG13/OXNvbWt6dGR4OWls/NTAucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/3Y4wgubopabszm7GN8u7i7XCaA--LGrP1s_1VMN91Cw/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvOXl3eG13/OXNvbWt6dGR4OWls/NTAucG5n" alt="Image description" width="880" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Amazon Web Services</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Fri, 02 Dec 2022 09:26:52 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/amazon-web-services-55cd</link>
      <guid>https://www.debug.school/pavanip2011_561/amazon-web-services-55cd</guid>
      <description>&lt;p&gt;Amazon Web Services: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/vgMY5quSee7pVZzmwV_jIRRK9WwnCzDn2e4Ww9EV0yw/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvcW51aHg3/dzEyamFuc2xsMDN1/NjAuanBn" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/vgMY5quSee7pVZzmwV_jIRRK9WwnCzDn2e4Ww9EV0yw/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvcW51aHg3/dzEyamFuc2xsMDN1/NjAuanBn" alt="Image description" width="880" height="900"&gt;&lt;/a&gt;AWS is the largest cloud computing platform, offering 200+ universally featured resources, from infrastructure to machine learning. The benefits of cloud computing is the opportunity to replace up-front capital infrastructure expenses with low variable costs that scale with business. With the Cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.&lt;/p&gt;

&lt;p&gt;Amazon Web Services provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world. With data center locations in the U.S., Europe, Brazil, Singapore, Japan, and Australia, customers across all industries are taking advantage of the following benefits:&lt;/p&gt;

&lt;p&gt;1) Low Cost: AWS offers low, pay-as-you-go pricing with no up-front expenses or long-term commitments. AWS builds and manages a global infrastructure at scale and passes the cost savings on to customers in the form of lower prices.&lt;/p&gt;

&lt;p&gt;2) Agility and Instant Elasticity: AWS provides a massive global cloud infrastructure that allows one to quickly innovate, experiment and iterate. Instead of waiting weeks or months for hardware, one can instantly deploy new applications, instantly scale up as workload grows, and instantly scale down based on demand.&lt;/p&gt;

&lt;p&gt;3) Open and Flexible: AWS is a language- and operating-system-agnostic platform. You choose the development platform or programming model that makes the most sense for your business, and you can choose which services to use (one or several) and how to use them. This flexibility allows you to focus on innovation, not infrastructure.&lt;/p&gt;

&lt;p&gt;4) Secure: AWS is a secure, durable technology platform with industry-recognized certifications and audits: PCI DSS Level 1, ISO 27001, FISMA Moderate, FedRAMP, HIPAA, and SOC 1 (formerly referred to as SAS 70 and/or SSAE 16) and SOC 2 audit reports. Our services and data centers have multiple layers of operational and physical security to ensure the integrity and safety of your data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/akXSx_u0CxnTzMwpIm_CfFhrnlRUWe2nHxvs6ODbRPQ/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvcHdvMWp4/eDM3ZmY1Y20xOGw2/M2MucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/akXSx_u0CxnTzMwpIm_CfFhrnlRUWe2nHxvs6ODbRPQ/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvcHdvMWp4/eDM3ZmY1Y20xOGw2/M2MucG5n" alt="Image description" width="307" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Important services provided by AWS:&lt;/p&gt;

&lt;p&gt;1) Amazon Elastic Compute Cloud:&lt;br&gt;
&lt;a href="https://www.debug.school/images/GqdbqTyuxtxRZVgch8z_nYobNAK0SA1CkZOMfkRE-g4/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvc3YzZ3Fl/dTMydHU2cXA0cHlw/bjUucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/GqdbqTyuxtxRZVgch8z_nYobNAK0SA1CkZOMfkRE-g4/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvc3YzZ3Fl/dTMydHU2cXA0cHlw/bjUucG5n" alt="Image description" width="464" height="263"&gt;&lt;/a&gt;&lt;br&gt;
 EC2 stands for Amazon Elastic Compute Cloud.&lt;br&gt;
 Amazon EC2 is a web service that provides resizable compute capacity in the cloud.&lt;br&gt;
Amazon EC2 delivers secure, reliable, high-performance, and cost-effective compute infrastructure to meet demanding business needs. Access the on-demand infrastructure and capacity you need to run HPC applications faster and cost-effectively.&lt;br&gt;
Access environments in minutes, dynamically scale capacity as needed, and benefit from AWS’s pay-as-you-go pricing. Amazon EC2 delivers the broadest choice of compute, networking (up to 400 Gbps), and storage services purpose-built to optimize price performance for ML projects.&lt;br&gt;
Use cases:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Customers: Finra and VOlkswagen group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2) Amazon Relational Database Services:&lt;br&gt;
&lt;a href="https://www.debug.school/images/GR2dou7Un90DkAT1yQIuErbqnjKwYa7d09ds26veP2U/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZDd3eGNm/YjFmMzhhcGxoeWtv/aG0ucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/GR2dou7Un90DkAT1yQIuErbqnjKwYa7d09ds26veP2U/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZDd3eGNm/YjFmMzhhcGxoeWtv/aG0ucG5n" alt="Image description" width="395" height="250"&gt;&lt;/a&gt;&lt;br&gt;
 Amazon Relational Database Service (RDS) is a managed SQL database service provided by Amazon Web Services (AWS). Amazon RDS supports an array of database engines to store and organize data. It also helps in relational database management tasks like data migration, backup, recovery and patching.&lt;/p&gt;

&lt;p&gt;Amazon RDS facilitates the deployment and maintenance of relational databases in the cloud. Cloud administrators use Amazon RDS to set up, operate, manage, and scale relational instances of cloud databases. Amazon RDS itself is not a database; It is a service used to manage relational databases.&lt;/p&gt;

&lt;p&gt;Amazon Relational Database Service RDS is a managed relational database service that provides you six familiar database engines to choose from, including Amazon Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS handles routine database tasks, such as provisioning, patching, backup, recovery, failure detection, and repair.&lt;br&gt;
     Customers: Mint and Samsung&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/eAzHJnSLq58Pz3ySiwJe_f3W3dhDo3QPSNKACRHpshY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbGU4YnFj/bGc2dXZ6Mnlqc2N6/dmMucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/eAzHJnSLq58Pz3ySiwJe_f3W3dhDo3QPSNKACRHpshY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbGU4YnFj/bGc2dXZ6Mnlqc2N6/dmMucG5n" alt="Image description" width="602" height="259"&gt;&lt;/a&gt;&lt;br&gt;
3) Amazon Simple Storage Service: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/qgvprAh2JMQUmxnNDno5VqbO1boSNiIHP3Xgs1Xc1bo/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvd2VpZmVi/d3Z1bTZzZHJkZDU4/ajMucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/qgvprAh2JMQUmxnNDno5VqbO1boSNiIHP3Xgs1Xc1bo/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvd2VpZmVi/d3Z1bTZzZHJkZDU4/ajMucG5n" alt="Image description" width="395" height="249"&gt;&lt;/a&gt;Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Amazon S3 provides a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere. Using this service, you can easily build applications that make use of cloud native storage&lt;br&gt;
-&amp;gt;Amazon S3 is a simple key-based object store. When you store data, you assign a unique object key that can later be used to retrieve the data. Keys can be any string, and they can be constructed to mimic hierarchical attributes. Alternatively, you can use S3 Object Tagging to organize your data across all of your S3 buckets and/or prefixes.&lt;/p&gt;
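&lt;p&gt;The key-based object model above can be mimicked with a plain dictionary. This is a toy illustration of the idea, not the S3 API; bucket and key names are made up:&lt;/p&gt;

```python
# Toy object store mimicking the S3 model: objects live under arbitrary
# string keys, and "folders" are just key prefixes.

class Bucket:
    def __init__(self):
        self.objects = {}

    def put_object(self, key, body):
        self.objects[key] = body

    def get_object(self, key):
        return self.objects[key]

    def list_objects(self, prefix=""):
        # Prefix listing is how flat keys mimic a directory hierarchy.
        return sorted(k for k in self.objects if k.startswith(prefix))

b = Bucket()
b.put_object("logs/2022/12/02/app.log", b"...")
b.put_object("logs/2022/12/03/app.log", b"...")
b.put_object("images/cat.png", b"...")
print(b.list_objects("logs/2022/12/"))
# ['logs/2022/12/02/app.log', 'logs/2022/12/03/app.log']
```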

&lt;p&gt;&lt;a href="https://www.debug.school/images/vUHWx338EQr2hqthId1KLh9Swvt4PjXb7pLu-wbARxk/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvMGswNGxm/Z3RiN3BnNXBqcWFy/aDIucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/vUHWx338EQr2hqthId1KLh9Swvt4PjXb7pLu-wbARxk/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvMGswNGxm/Z3RiN3BnNXBqcWFy/aDIucG5n" alt="Image description" width="616" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   Customers: Georgia- Pacific, Zalando
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4) AWS Lambda: AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Automatically respond to code execution requests at any scale, from a dozen events per day to hundreds of thousands per second. Save costs by paying only for the compute time you use, billed per millisecond, instead of provisioning infrastructure upfront for peak capacity. Optimize code execution time and performance with the right function memory size. Respond to high demand in double-digit milliseconds with Provisioned Concurrency.&lt;br&gt;
-&amp;gt; Each AWS Lambda function runs in its own isolated environment, with its own resources and file system view. AWS Lambda uses the same techniques as Amazon EC2 to provide security and separation at the infrastructure and execution levels.&lt;br&gt;
-&amp;gt; AWS Lambda stores code in Amazon S3 and encrypts it at rest. AWS Lambda performs additional integrity checks while your code is in use.&lt;br&gt;
Customers: Nielsen, The Coca-Cola Company&lt;/p&gt;

&lt;p&gt;5) Amazon Glacier: AWS Glacier is the backup and archival storage service provided by AWS. It is an extremely low-cost, long-term, durable, secure storage service that is ideal for backup and archival needs. In much of its operation AWS Glacier is similar to S3, and it interacts directly with S3 using S3 lifecycle policies. &lt;br&gt;
-&amp;gt; The main difference between AWS S3 and Glacier is the cost structure. The cost of storing the same amount of data in AWS Glacier is significantly less than in S3; storage costs in Glacier can be as little as roughly $1 per terabyte of data per month.&lt;br&gt;
Features of AWS Glacier&lt;br&gt;
-&amp;gt; Given the extremely cheap storage provided by AWS Glacier, it doesn’t offer as many features as AWS S3, and access to data in AWS Glacier is an extremely slow process.&lt;br&gt;
Just like S3, AWS Glacier can essentially store all kinds of data types and objects.&lt;br&gt;
-&amp;gt; Durability: AWS Glacier, just like Amazon S3, claims 99.999999999% durability (11 9’s). This means the possibility of losing data stored in the service is vanishingly small. AWS Glacier replicates data across multiple Availability Zones to provide this high durability.&lt;br&gt;
-&amp;gt; Data Retrieval Time: Data retrieval from AWS Glacier can be as fast as 1-5 minutes (high-cost retrieval) to 5-12 hours (cheap data retrieval).&lt;br&gt;
-&amp;gt; AWS Glacier Console: The AWS Glacier dashboard is not as intuitive and friendly as AWS S3. The Glacier console can only be used to create vaults. Data transfer to and from AWS Glacier can only be done via some kind of code. This functionality is provided via:&lt;br&gt;
    . The AWS Glacier API&lt;br&gt;
    . The AWS SDKs&lt;br&gt;
-&amp;gt; Region-specific costs: The cost of storing data in AWS Glacier varies from region to region.&lt;br&gt;
-&amp;gt; Security: AWS Glacier automatically encrypts your data using the AES-256 algorithm and manages its keys for you.&lt;br&gt;
Apart from normal IAM controls AWS Glacier also has resource policies (vault access policies and vault lock policies) that can be used to manage access to your Glacier vaults.&lt;br&gt;
-&amp;gt; Infinite Storage Capacity: AWS Glacier offers virtually unlimited storage capacity.&lt;br&gt;
          Customers: QUBE, BANDLAB&lt;/p&gt;
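&lt;p&gt;The S3-vs-Glacier cost difference described above is simple per-gigabyte arithmetic. The two rates below are placeholders for illustration, not current AWS prices:&lt;/p&gt;

```python
# Rough monthly storage-cost comparison between S3 Standard and Glacier,
# using assumed per-GB monthly rates (placeholders, not quoted prices).

S3_STANDARD_PER_GB = 0.023     # assumed $/GB-month
GLACIER_PER_GB = 0.004         # assumed $/GB-month

def monthly_storage_cost(gb, rate):
    return round(gb * rate, 2)

archive_gb = 10_000   # a 10 TB backup archive
print(monthly_storage_cost(archive_gb, S3_STANDARD_PER_GB))  # 230.0
print(monthly_storage_cost(archive_gb, GLACIER_PER_GB))      # 40.0
```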

&lt;p&gt;6) Amazon Simple Notification Service:&lt;br&gt;
&lt;a href="https://www.debug.school/images/B3YjtczgCi2qDzvBhblWDuWD-Qhp2nl8QWxG8ckQ6V8/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvcWozemg0/MXUzbzJsMnBydWhx/cDkucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/B3YjtczgCi2qDzvBhblWDuWD-Qhp2nl8QWxG8ckQ6V8/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvcWozemg0/MXUzbzJsMnBydWhx/cDkucG5n" alt="Image description" width="385" height="260"&gt;&lt;/a&gt; Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. &lt;br&gt;
It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications. &lt;br&gt;
It is designed to make web-scale computing easier for developers. &lt;br&gt;
Benefits of amazon SNS:&lt;br&gt;
. Instantaneous, push-based delivery (no polling)&lt;br&gt;
. Simple APIs and easy integration with applications&lt;br&gt;
. Flexible message delivery over multiple transport protocols&lt;br&gt;
. Inexpensive, pay-as-you-go model with no up-front costs&lt;br&gt;
. Web-based AWS Management Console offers the simplicity of a point-and-click interface&lt;/p&gt;

&lt;p&gt;Amazon Simple Notification Service (SNS) sends notifications two ways, A2A and A2P. A2A provides high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications. These applications include Amazon Simple Queue Service (SQS), Amazon Kinesis Data Firehose, AWS Lambda, and other HTTPS endpoints. A2P functionality lets you send messages to your customers with SMS texts, push notifications, and email. Simplify your architecture and reduce costs with message filtering, batching, ordering, and deduplication. Increase message durability with archiving, delivery retries, and dead-letter queues.&lt;br&gt;
         Customers: Change Health Care, NASA&lt;/p&gt;

&lt;p&gt;7) Amazon Virtual Private Cloud (VPC):&lt;br&gt;
&lt;a href="https://www.debug.school/images/Luw2YmYTMOAaloC-sX-8IN99texAr6v2nzuVFpM83jY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbTE3Mm9u/eGxodGd4YnRyaGhu/eWIucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/Luw2YmYTMOAaloC-sX-8IN99texAr6v2nzuVFpM83jY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbTE3Mm9u/eGxodGd4YnRyaGhu/eWIucG5n" alt="Image description" width="414" height="257"&gt;&lt;/a&gt;&lt;br&gt;
  Amazon VPC, or Amazon Virtual Private Cloud, is a service that allows its users to launch their virtual machines in a protected, isolated virtual environment defined by them. Amazon VPC gives you full control over your virtual networking environment, including resource placement, connectivity, and security. Spend less time setting up, managing, and validating your virtual network. Customize your virtual network by choosing your own IP address range, creating subnets, and configuring route tables.&lt;br&gt;
It’s applicable to organizations where the data is scattered and needs to be managed well. &lt;/p&gt;

&lt;p&gt;Amazon VPC comprises a variety of objects that will be familiar to customers with existing networks:&lt;br&gt;
Components of Amazon VPC:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A Virtual Private Cloud: A logically isolated virtual network in the AWS cloud. You define a VPC’s IP address space from ranges you select.&lt;/li&gt;
&lt;li&gt;Subnet: A segment of a VPC’s IP address range where you can place groups of isolated resources.&lt;/li&gt;
&lt;li&gt;Internet Gateway: The Amazon VPC side of a connection to the public Internet.&lt;/li&gt;
&lt;li&gt;NAT Gateway: A highly available, managed Network Address Translation (NAT) service for your resources in a private subnet to access the Internet.&lt;/li&gt;
&lt;li&gt;Virtual private gateway: The Amazon VPC side of a VPN connection.&lt;/li&gt;
&lt;li&gt;Peering Connection: A peering connection enables you to route traffic via private IP addresses between two peered VPCs.&lt;/li&gt;
&lt;li&gt;VPC Endpoints: Enables private connectivity to services hosted in AWS, from within your VPC without using an Internet Gateway, VPN, Network Address Translation (NAT) devices, or firewall proxies.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Egress-only Internet Gateway: A stateful gateway to provide egress only access for IPv6 traffic from the VPC to the Internet.&lt;/p&gt;

&lt;p&gt;Customers: TABLEAU, ATLASSIAN&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
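&lt;p&gt;The VPC/subnet relationship above can be sketched with Python's standard-library ipaddress module: pick an IP range for the VPC, then carve it into subnets. The CIDR blocks shown are example values:&lt;/p&gt;

```python
# A VPC is a logically isolated network with an IP address space you
# choose; subnets are segments of that space. Sketched with ipaddress.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")      # the VPC's IP address space
subnets = list(vpc.subnets(new_prefix=24))     # carve it into /24 segments

print(len(subnets))               # 256 possible /24 subnets
print(str(subnets[0]))            # 10.0.0.0/24
print(subnets[0].subnet_of(vpc))  # True: every subnet lies inside the VPC range
```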

&lt;p&gt;8) Amazon Kinesis:&lt;br&gt;
&lt;a href="https://www.debug.school/images/tSgZfsA-Gbi-JtUI4k19dbf3d5u-ZhT0nGprytM36Qc/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZHltYnB1/YnFubmI1ZXk3MzZ4/ZWMucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/tSgZfsA-Gbi-JtUI4k19dbf3d5u-ZhT0nGprytM36Qc/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZHltYnB1/YnFubmI1ZXk3MzZ4/ZWMucG5n" alt="Image description" width="513" height="428"&gt;&lt;/a&gt; &lt;br&gt;
Amazon Kinesis is a service provided by Amazon Web Services which allows users to process a large amount of data (which can be audio, video, application logs, website clickstreams, and IoT telemetry) per second in real time. &lt;br&gt;
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. &lt;br&gt;
Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. &lt;br&gt;
Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin.&lt;br&gt;
Features of Amazon Kinesis&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Cost-efficient: All the services provided by Amazon are cost-efficient, as they follow the pay-as-you-go model: you pay for the service according to usage, not a flat price. This is advantageous for users, who pay only for what they use.&lt;/li&gt;
&lt;li&gt; Integration with other AWS services: Amazon Kinesis integrates with other AWS services, such as Amazon DynamoDB, Amazon Redshift, and other services that deal with large amounts of data.&lt;/li&gt;
&lt;li&gt; Availability: You can access it from anywhere at any time; all you need is a good network connection.&lt;/li&gt;
&lt;li&gt; Real-time processing: It allows you to work on data that must be updated instantaneously as changes arrive. This is the most advantageous feature of Kinesis, because real-time processing becomes important when you are dealing with such a huge amount of data.
Limits of Amazon Kinesis:
. Amazon Kinesis retains a stream's records for 24 hours by default; this can be extended, but only up to 7 days.
. There is no upper limit on the number of streams users can have in their accounts.
. One shard supports up to 1,000 PUT records per second.
    Customers: Zillow, Netflix&lt;/li&gt;
&lt;/ol&gt;
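&lt;p&gt;The shard limit mentioned above (1,000 PUT records per second per shard) makes capacity planning a ceiling division. A minimal sketch, not the Kinesis API:&lt;/p&gt;

```python
# How many shards a stream needs for a given ingest rate, based on the
# 1,000 PUT records/second/shard limit described above.
import math

PUTS_PER_SHARD = 1000   # records per second per shard

def shards_needed(records_per_second):
    return max(1, math.ceil(records_per_second / PUTS_PER_SHARD))

print(shards_needed(250))    # 1
print(shards_needed(4500))   # 5: 4,500 rec/s needs five shards
```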

&lt;p&gt;9) Amazon EC2 Auto Scaling:&lt;br&gt;
&lt;a href="https://www.debug.school/images/2FXP5u83Zbdcy0jXc1ahe1yDrqQCI-OtwT6uvp1kB5s/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbWYyaWNm/dXllZGdxdWQ1eTJ2/d2YucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/2FXP5u83Zbdcy0jXc1ahe1yDrqQCI-OtwT6uvp1kB5s/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbWYyaWNm/dXllZGdxdWQ1eTJ2/d2YucG5n" alt="Image description" width="402" height="252"&gt;&lt;/a&gt; Amazon EC2 Auto Scaling helps you maintain application availability. Improve fault tolerance through automatic detection and replacement of unhealthy instances. Increase availability with predictive or dynamic scaling policies with the right amount of compute capacity. Optimize workload performance and cost by combining purchase options and instance types. Reduce the complexity of configuration changes and application deployments with instance refresh.&lt;/p&gt;

&lt;p&gt;10) Amazon Identity Access Management (IAM): With AWS Identity and Access Management (IAM), you can specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS.&lt;br&gt;
-&amp;gt; Continually analyze access to right-size permissions on the journey to least privilege.&lt;br&gt;
            Customers: Dow Jones, JMT&lt;/p&gt;

&lt;p&gt;11) Amazon Simple Queue Service: Amazon Simple Queue Service (SQS) lets you send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. &lt;br&gt;
-&amp;gt; Eliminate overhead with no upfront costs and without needing to manage software or maintain infrastructure. Reliably deliver large volumes of data, at any level of throughput, without losing messages or needing other services to be available. &lt;br&gt;
-&amp;gt; Securely send sensitive data between applications and centrally manage your keys using AWS Key Management. &lt;br&gt;
-&amp;gt; Scale elastically and cost-effectively based on usage, so you don’t have to worry about capacity planning and reprovisioning.&lt;br&gt;
Features of SQS:&lt;br&gt;
 -&amp;gt; Increase application reliability and scale.&lt;br&gt;
 -&amp;gt; Decouple microservices and process event-driven applications.&lt;br&gt;
 -&amp;gt; Ensure work is completed cost-effectively and on time.&lt;br&gt;
 -&amp;gt; Maintain message ordering with deduplication.&lt;br&gt;
           Customers: BMW, CAPITAL ONE&lt;/p&gt;
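&lt;p&gt;The last feature above, message ordering with deduplication, can be sketched as a toy in-memory queue. This illustrates the idea only and is not the SQS API; message IDs and bodies are made up:&lt;/p&gt;

```python
# Toy FIFO queue with deduplication: messages come out in send order,
# and a message ID that was already seen is silently discarded.
from collections import deque

class FifoQueue:
    def __init__(self):
        self.queue = deque()
        self.seen_ids = set()   # deduplication window

    def send(self, message_id, body):
        if message_id in self.seen_ids:
            return False        # duplicate: dropped
        self.seen_ids.add(message_id)
        self.queue.append(body)
        return True

    def receive(self):
        return self.queue.popleft() if self.queue else None

q = FifoQueue()
q.send("m1", "first")
q.send("m2", "second")
q.send("m1", "first again")   # duplicate ID: dropped
print(q.receive())  # first
print(q.receive())  # second
print(q.receive())  # None
```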

&lt;p&gt;12) Amazon ElastiCache: Amazon ElastiCache is a fully managed, in-memory caching service supporting flexible, real-time use cases. You can use ElastiCache for caching, which accelerates application and database performance, or as a primary data store for use cases that don't require durability, like session stores, gaming leaderboards, streaming, and analytics. ElastiCache is compatible with Redis and Memcached. Scale with just a few clicks to meet the needs of your most demanding, internet-scale applications. Reduce costs and eliminate the operational overhead of self-managed caching.&lt;br&gt;
Features of Elasticache:&lt;br&gt;
  -&amp;gt; Access data with microsecond latency and high throughput for lightning-fast application performance.&lt;br&gt;
-&amp;gt; Cache your data to reduce pressure on your backend database, enabling higher application scalability and reducing operational burden.&lt;br&gt;
-&amp;gt; Use ElastiCache to store non-durable datasets in memory and support real-time applications with microsecond latency.&lt;br&gt;
           Customers: The Pokemon Company, Tinder&lt;/p&gt;

&lt;p&gt;13) Amazon SageMaker: SageMaker is a fully managed service providing developers and data scientists with the resources to build, train, and deploy machine learning models rapidly. Use it to create highly scalable machine learning models that deploy products faster and deliver to market quickly. Access, label, and process large amounts of structured data (tabular data) and unstructured data (photo, video, geospatial, and audio) for ML. Reduce training time from hours to minutes with optimized infrastructure. Boost team productivity up to 10 times with purpose-built tools.&lt;br&gt;
                Supported frameworks: PyTorch, TensorFlow&lt;/p&gt;

&lt;p&gt;14) Amazon Lightsail: Amazon Lightsail offers easy-to-use virtual private server (VPS) instances, containers, storage, databases, and more at a cost-effective monthly price. Automatically configure networking, access, and security environments. Easily scale as you grow—or migrate your resources to the broader AWS ecosystem, such as Amazon EC2. Leverage the security and reliability of the world’s leading cloud platform.&lt;br&gt;
-&amp;gt; A Lightsail instance is a virtual private server (VPS) that lives in the AWS Cloud. Using Lightsail instances one can store data, run code, and build web-based applications or websites. Instances can connect to each other and to other AWS resources through both public (Internet) and private (VPC) networking. One can create, manage, and connect easily to instances right from the Lightsail console.&lt;br&gt;&lt;br&gt;
        Customers: Accentric, Gourmeal&lt;/p&gt;

&lt;p&gt;15) Amazon Elastic File System (EFS):&lt;br&gt;
&lt;a href="https://www.debug.school/images/g06qH8Iljs43iITbAiNbWlbjGTETK3fTjpIDlpLA0CY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMva2RoMGwz/a2VsODlrNGNxOG9k/NmoucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/g06qH8Iljs43iITbAiNbWlbjGTETK3fTjpIDlpLA0CY/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMva2RoMGwz/a2VsODlrNGNxOG9k/NmoucG5n" alt="Image description" width="166" height="196"&gt;&lt;/a&gt; Amazon EFS file systems can automatically scale from gigabytes to petabytes of data without needing to provision storage. Tens, hundreds, or even thousands of compute instances can access an Amazon EFS file system at the same time, and Amazon EFS provides consistent performance to each compute instance. Amazon EFS is designed to be highly durable and highly available. With Amazon EFS, there is no minimum fee or setup costs, and one can pay only for what they use. &lt;br&gt;
-&amp;gt; Amazon EFS provides performance for a broad spectrum of workloads and applications: big data and analytics, media processing workflows, content management, web serving, and home directories.&lt;br&gt;
-&amp;gt; Amazon EFS Standard storage classes are ideal for workloads that require the highest levels of durability and availability.&lt;br&gt;
 Use cases:&lt;br&gt;
-&amp;gt; Share code and other files in a secure, organized way to increase DevOps agility and respond faster to customer feedback.&lt;br&gt;
-&amp;gt; Persist and share data from your AWS containers and serverless applications with zero&lt;br&gt;
management required.&lt;br&gt;
-&amp;gt; Simplify persistent storage for modern content management system (CMS) workloads.&lt;br&gt;
-&amp;gt; Easier to use and scale, Amazon EFS offers the performance and consistency needed for machine learning (ML) and big data analytics workloads.&lt;br&gt;
           Customers: Johnson and Johnson, Discover&lt;/p&gt;

&lt;p&gt;16) Amazon CloudWatch: &lt;br&gt;
&lt;a href="https://www.debug.school/images/eMl_bRRA49Qlvf5tdSMsZvAHHSZXLua1Z0G6ku4WkvQ/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvaTVuN3Az/eXRscnhwYnUxcHJo/MmEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/eMl_bRRA49Qlvf5tdSMsZvAHHSZXLua1Z0G6ku4WkvQ/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvaTVuN3Az/eXRscnhwYnUxcHJo/MmEucG5n" alt="Image description" width="211" height="205"&gt;&lt;/a&gt;Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance. Improve operational performance using alarms and automated actions set to activate at predetermined thresholds. Troubleshoot operational problems with actionable insights derived from logs and metrics in your CloudWatch dashboards.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;          Customers: Pushpay, Mapbox
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
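&lt;p&gt;The "alarms set to activate at predetermined thresholds" idea can be sketched in a few lines. This is an illustrative evaluation loop, not the CloudWatch API; the function name and the sample datapoints are made up:&lt;/p&gt;

```python
# Sketch of alarm evaluation: an alarm fires when a metric breaches a
# threshold for N consecutive evaluation periods.
def evaluate_alarm(datapoints, threshold, periods):
    # Returns True once `periods` consecutive datapoints exceed the
    # threshold; a single spike is not enough to trigger.
    streak = 0
    for value in datapoints:
        streak = streak + 1 if value > threshold else 0
        if streak >= periods:
            return True
    return False

cpu = [41, 72, 88, 91, 95, 60]      # per-minute CPU utilisation (%)
print(evaluate_alarm(cpu, 80, 3))   # True: 88, 91, 95 breach for 3 periods
```

&lt;p&gt;Requiring consecutive breaches is what keeps transient spikes from paging anyone, which is also how real monitoring alarms are usually configured.&lt;/p&gt;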

&lt;p&gt;17) Amazon Cloud Directory: Amazon Cloud Directory enables you to build flexible cloud-native directories for organizing hierarchies of data along multiple dimensions. With Cloud Directory, you can create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. While traditional directory solutions, such as Active Directory Lightweight Directory Services (AD LDS) and other LDAP-based directories, limit you to a single hierarchy, Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that can be navigated through separate hierarchies for reporting structure, location, and cost center.  Cloud Directory eliminates time-consuming and expensive administrative tasks, such as scaling infrastructure and managing servers.&lt;/p&gt;

&lt;p&gt;18) Amazon Cognito:&lt;br&gt;
&lt;a href="https://www.debug.school/images/G-DUO9nTq1fEhPO1buVCKQh-6w9H3-tfKqtJBWG7fcU/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZW1hMDBm/enh5eXhtNTA1enFj/ZXEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/G-DUO9nTq1fEhPO1buVCKQh-6w9H3-tfKqtJBWG7fcU/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZW1hMDBm/enh5eXhtNTA1enFj/ZXEucG5n" alt="Image description" width="438" height="208"&gt;&lt;/a&gt; With Amazon Cognito, you can add user sign-up and sign-in features and control access to your web and mobile applications. Amazon Cognito provides an identity store that scales to millions of users, supports social and enterprise identity federation, and offers advanced security features to protect your consumers and business. Built on open identity standards, Amazon Cognito supports various compliance regulations and integrates with frontend and backend development resources. Deliver frictionless customer identity and access management (CIAM) with a cost-effective and customizable platform.&lt;br&gt;
               Customers: NHS Digital, Trend Micro&lt;br&gt;
19) AWS Elastic Beanstalk: &lt;br&gt;
&lt;a href="https://www.debug.school/images/yG7IYFU9iUGTGjxJb9dxBr2eQn_Wghz83KUV16JDJfI/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbzFrZmpm/MXN5bXM1c2JudHlj/b3AucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/yG7IYFU9iUGTGjxJb9dxBr2eQn_Wghz83KUV16JDJfI/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbzFrZmpm/MXN5bXM1c2JudHlj/b3AucG5n" alt="Image description" width="395" height="206"&gt;&lt;/a&gt;Elastic Beanstalk is a service for deploying and scaling web applications and services. Upload your code and Elastic Beanstalk automatically handles the deployment—from capacity provisioning, load balancing, and auto scaling to application health monitoring. Use adjustable settings to scale your application for handling peaks in traffic, while minimizing costs.&lt;br&gt;
Features: &lt;br&gt;
-&amp;gt; Deploy scalable web applications in minutes without the complexity of provisioning and managing underlying infrastructure.&lt;br&gt;
-&amp;gt; Use your favorite programming language to build mobile API backends, and Elastic Beanstalk will manage patches and updates.&lt;br&gt;
-&amp;gt; Migrate stateful applications off legacy infrastructure to Elastic Beanstalk and connect securely to your private network.&lt;/p&gt;

&lt;p&gt;20) Amazon DynamoDB:&lt;br&gt;
&lt;a href="https://www.debug.school/images/SlSii9KI9f4qGOTib8cdACicb5W0a-4dDrGJkeAkzcs/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvYTc5aTZs/eHZsNzJyZTcyd3hk/aXAucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/SlSii9KI9f4qGOTib8cdACicb5W0a-4dDrGJkeAkzcs/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvYTc5aTZs/eHZsNzJyZTcyd3hk/aXAucG5n" alt="Image description" width="411" height="207"&gt;&lt;/a&gt; Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. &lt;br&gt;
-&amp;gt; Build internet-scale applications supporting user-content metadata and caches that require high concurrency and connections for millions of users and millions of requests per second. &lt;br&gt;
-&amp;gt; Scale throughput and concurrency for media and entertainment workloads such as real-time video streaming and interactive content and deliver lower latency with multi-Region replication across AWS Regions. &lt;br&gt;
-&amp;gt; Use design patterns for deploying shopping carts, workflow engines, inventory tracking, and customer profiles. DynamoDB supports high-traffic, extreme-scaled events and can handle millions of queries per second. &lt;br&gt;
-&amp;gt; Focus on driving innovation with no operational overhead. Build out your game platform with player data, session history, and leaderboards for millions of concurrent users.&lt;br&gt;
      Customers: Disney, Dropbox, Zoom&lt;/p&gt;
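&lt;p&gt;The key-value access pattern behind those use cases can be sketched with items addressed by a partition key plus a sort key. A nested dict stands in for the service here; the table shape, key names, and helper functions are illustrative, not the DynamoDB API:&lt;/p&gt;

```python
# Toy key-value table keyed by (partition key, sort key), mimicking how
# DynamoDB items are addressed.
table = {}

def put_item(pk, sk, attrs):
    table.setdefault(pk, {})[sk] = attrs

def get_item(pk, sk):
    return table.get(pk, {}).get(sk)

def query(pk):
    # All items sharing a partition key, ordered by sort key --
    # the shape of a Query against a single partition.
    return [table[pk][sk] for sk in sorted(table.get(pk, {}))]

put_item("user#42", "order#2022-12-01", {"total": 30})
put_item("user#42", "order#2022-12-13", {"total": 55})
print(get_item("user#42", "order#2022-12-01"))   # {'total': 30}
print(query("user#42"))                          # both orders, in date order
```

&lt;p&gt;Designing the sort key so that lexical order matches the order you want to read items in (here, ISO dates) is the core of single-table key design.&lt;/p&gt;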

</description>
    </item>
    <item>
      <title>Attaching EBS volume in EC2 instance</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Fri, 02 Dec 2022 09:03:39 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/attaching-ebs-volume-in-ec2-instance-247e</link>
      <guid>https://www.debug.school/pavanip2011_561/attaching-ebs-volume-in-ec2-instance-247e</guid>
      <description>&lt;p&gt;EBS: ELASTIC BLOCK STORAGE&lt;br&gt;
AWS Elastic Block Store (EBS) is Amazon’s block-level storage solution used with the EC2 cloud service to store persistent data. This means that the data is kept on the AWS EBS servers even when the EC2 instances are shut down. EBS offers high availability and low-latency performance within the selected Availability Zone, allowing users to scale storage capacity under a low, subscription-based pricing model. Data volumes can be dynamically attached, detached, and scaled with any EC2 instance, just like a physical block storage drive. As a highly dependable cloud service, EBS guarantees 99.999% availability.&lt;/p&gt;

&lt;p&gt;AWS EBS is different from the standard EC2 Instance Store, which merely provides temporary storage available on the physical EC2 host servers. The EC2 Instance Store is useful for temporary content such as caches, buffers, or files that are replicated across the hosted servers. For data that needs to be available persistently, regardless of the operating life of an EC2 instance, EBS is the appropriate choice.&lt;/p&gt;

&lt;p&gt;Steps to attach an EBS volume to an EC2 instance:&lt;br&gt;
Step 1: Go to console home &amp;gt; Build a solution &amp;gt; Launch a virtual &lt;br&gt;
        machine with EC2&lt;br&gt;
&lt;a href="https://www.debug.school/images/sw1aDCVbEEmQq7BgEZYz25Esd1L7-EA28WnCzDj3axA/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbHdkOHA5/cmdlYzRnZXI3aTNy/MHEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/sw1aDCVbEEmQq7BgEZYz25Esd1L7-EA28WnCzDj3axA/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbHdkOHA5/cmdlYzRnZXI3aTNy/MHEucG5n" alt="Image description" width="880" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Click on launch instance&lt;br&gt;
&lt;a href="https://www.debug.school/images/EFH3wQpU7ZWDHQPNwRGxyDOa2XtTuiKGgD1fEW7BFzM/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvb2JuOHRs/cDc0M25sZjBhN2hi/YnMucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/EFH3wQpU7ZWDHQPNwRGxyDOa2XtTuiKGgD1fEW7BFzM/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvb2JuOHRs/cDc0M25sZjBhN2hi/YnMucG5n" alt="Image description" width="880" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Give the EC2 instance the name EBS1&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/jbshZllR55DJuN9WGwLfb7ZNem1QJxSKw90XK7_N9Mg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZ2ZhcTZ0/YjMwNW01enJ3bXdw/ZmUucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/jbshZllR55DJuN9WGwLfb7ZNem1QJxSKw90XK7_N9Mg/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZ2ZhcTZ0/YjMwNW01enJ3bXdw/ZmUucG5n" alt="Image description" width="880" height="403"&gt;&lt;/a&gt;&lt;br&gt;
Step 4: Choose the Amazon Machine Image (AMI) for EBS1&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/f6B9xVFn1wQCW6Ljl7O3iFRL703Sn5i-uV_fMpESs8Y/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbzlzdWhm/ZTZubmR4dWxvcGR0/aHMucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/f6B9xVFn1wQCW6Ljl7O3iFRL703Sn5i-uV_fMpESs8Y/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbzlzdWhm/ZTZubmR4dWxvcGR0/aHMucG5n" alt="Image description" width="880" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 5: Choose an instance type&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/SehxIdkRZ2zeGBIUJiuFjYMyOe7nqUetjXGhUU5a62U/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMveWo3cmQz/M2w2MTlybGc4eWtq/MGYucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/SehxIdkRZ2zeGBIUJiuFjYMyOe7nqUetjXGhUU5a62U/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMveWo3cmQz/M2w2MTlybGc4eWtq/MGYucG5n" alt="Image description" width="880" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 6: Choose the key pair and create the key pair.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/EHxsgG-bj8sfq37uib6HO5LCb5LE9zI3meXvwepMTwM/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbDEzeHNh/cXZsYWF0OHo1MXdq/c3QucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/EHxsgG-bj8sfq37uib6HO5LCb5LE9zI3meXvwepMTwM/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvbDEzeHNh/cXZsYWF0OHo1MXdq/c3QucG5n" alt="Image description" width="880" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 7: Create the security group and open the port as per the requirement&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/YeF9o0NrRdc0WEt1kDwiw71_pD1KcMz_lPEH59uVD1A/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvd2V4YjVu/azg5dzBudDdlMnFs/Ym0ucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/YeF9o0NrRdc0WEt1kDwiw71_pD1KcMz_lPEH59uVD1A/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvd2V4YjVu/azg5dzBudDdlMnFs/Ym0ucG5n" alt="Image description" width="880" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 8: Click on Add new volume to attach the extra EBS volume&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/zBaIT2Yq4D1YjiMJQqwXYeGgypF-2R903Z5_Ivp2HoM/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvaTByMmh5/aTRuNzZvOXhsdXFh/c3QucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/zBaIT2Yq4D1YjiMJQqwXYeGgypF-2R903Z5_Ivp2HoM/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvaTByMmh5/aTRuNzZvOXhsdXFh/c3QucG5n" alt="Image description" width="880" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 9: Configure EBS Size&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/ngLY-TqE7yeEeigE0iyCs8QqPHyly_4fVl4R4JmQlhQ/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZmE0ZGd4/dmowdzVsbzQ1MGpz/bHUucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/ngLY-TqE7yeEeigE0iyCs8QqPHyly_4fVl4R4JmQlhQ/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZmE0ZGd4/dmowdzVsbzQ1MGpz/bHUucG5n" alt="Image description" width="880" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 10: Select EBS size as per Requirement&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/OHA9Q4-uZgP8VGTViX5VYpRj-XrRLWQQqULfF9ytuu8/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvNDV5d2ox/dXBzZms1anFxc2Jn/bmUucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/OHA9Q4-uZgP8VGTViX5VYpRj-XrRLWQQqULfF9ytuu8/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvNDV5d2ox/dXBzZms1anFxc2Jn/bmUucG5n" alt="Image description" width="880" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS EBS offers the following storage volume options:&lt;/p&gt;

&lt;p&gt;General Purpose SSD (gp2): An optimum balance between cost and performance for a variety of IT workloads. Use cases include virtual desktops, apps, dev and test environments, among others.&lt;/p&gt;

&lt;p&gt;Provisioned IOPS SSD (io1): The high-performance functionality serves particularly well for mission-critical IT workloads. Suitable use cases include large databases and business apps that require more than 16,000 IOPS or 250 MiB/s of throughput per volume.&lt;/p&gt;

&lt;p&gt;Throughput Optimized HDD (st1): A low cost alternative for large storage volume workloads with high performance throughput requirements. Examples include streaming workloads, big data applications, log processing and data warehousing.&lt;/p&gt;

&lt;p&gt;Cold HDD (sc1): An inexpensive alternative for use cases with a requirement to maintain minimal cost for large volume data storage. Examples include workloads that are accessed less frequently.&lt;/p&gt;

&lt;p&gt;Step 11: Click the Launch instance button&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/Mi-eWQy-I0u1VsBxVmmSRPMzDTCRbBox3Aznw4BrVp0/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZmxhZW1m/bmQ2cWJqZDg0Z2Vv/MWcucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/Mi-eWQy-I0u1VsBxVmmSRPMzDTCRbBox3Aznw4BrVp0/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvZmxhZW1m/bmQ2cWJqZDg0Z2Vv/MWcucG5n" alt="Image description" width="880" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the Instances screen, we can view the status of the launch. It takes a short time for an instance to launch. When you launch an instance, its initial state is pending. After the instance starts, its state changes to running and it receives a public DNS name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/so5bDMfV-Eg1enQmUnSSM72I7qvo-RtOPNBU7ysURMo/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvMGVjdWM3/eTl2bXIxYW43aG5h/eTMucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/so5bDMfV-Eg1enQmUnSSM72I7qvo-RtOPNBU7ysURMo/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvMGVjdWM3/eTl2bXIxYW43aG5h/eTMucG5n" alt="Image description" width="880" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Files vs Objects vs Blocks</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Thu, 01 Dec 2022 09:17:35 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/files-vs-objects-vs-blocks-5c8</link>
      <guid>https://www.debug.school/pavanip2011_561/files-vs-objects-vs-blocks-5c8</guid>
      <description>&lt;p&gt;Files, Blocks and objects are storage formats that hold, organize, and present data in different ways- each with their own capabilities and limitations. File storage organizes and represents data as a hierarchy of files in folders, block storage chunks data into arbitrarily organized, evenly sized volumes, and object storage manages data and links it to associated metadata.&lt;br&gt;
Containers are highly flexible and bring incredible scale to how apps and storage are delivered.&lt;/p&gt;

&lt;p&gt;File storage:&lt;br&gt;
File storage is also called file-level or file-based storage. Data is stored as a single piece of information inside a folder. When one needs to access that data, the computer needs to know the path to find it. Data stored in files is organized and retrieved using a limited amount of metadata that tells the computer where the file is kept; it is like a library card catalog for data files. This is the oldest and most widely used data storage system for direct-attached and network-attached storage. File storage has broad capabilities and can store just about anything. It is great for storing an array of complex files and is fairly easy for users to navigate. File-based storage must scale out by adding more systems, rather than scaling up by adding more capacity.&lt;br&gt;
Block storage:&lt;br&gt;
Block storage chops data into blocks and stores them as separate pieces. Each block of data is given a unique identifier, which allows a storage system to place the smaller pieces of data wherever is most convenient. Block storage is often configured to decouple the data from the user's environment and spread it across multiple environments that can better serve the data. When data is requested, the underlying storage software reassembles the blocks from these environments and presents them back to the user. It is usually deployed in storage area network (SAN) environments and must be tied to a functioning server.&lt;/p&gt;

&lt;p&gt;Block storage does not rely on a single path like file storage. Each block lives on its own and can be partitioned, so it can be accessed from a different operating system, which gives the user complete freedom to configure their data. It works well with big transactions and deployments of huge databases: the more data there is to store, the better block storage performs.&lt;/p&gt;

&lt;p&gt;It has disadvantages: it is more expensive, and it has limited capability to handle metadata, which means metadata needs to be dealt with at the application or database level.&lt;/p&gt;

&lt;p&gt;Object storage:&lt;/p&gt;

&lt;p&gt;Object storage is a flat structure in which files are broken into pieces and spread out among hardware. In this model, the data is broken into discrete units called objects and is kept in a single repository. Object storage volumes work as modular units: each is a self-contained repository that owns the data, a unique identifier that allows the object to be found over a distributed system, and the metadata that describes the data. The metadata is important and includes details like age, privacy, security, and access contingencies. Object storage metadata can also be extremely detailed and is capable of storing information such as where a video was shot and which camera was used.&lt;br&gt;
Object storage is accessed through a simple HTTP API, available to most clients in all languages. It is cost-efficient. It is a storage system well suited to static data, and its agility and flat nature mean it can scale to extremely large quantities of data. It is good at storing unstructured data.&lt;/p&gt;

&lt;p&gt;There are disadvantages: objects can't be modified in place; they must be rewritten as a whole. Object storage does not work well for traditional databases, because writing objects is a slow process, and writing an app to use object storage is not as simple as using file storage. &lt;/p&gt;
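&lt;p&gt;The contrast above can be made concrete with a minimal sketch of an object store: a flat namespace mapping keys to data plus rich per-object metadata, with whole-object writes. The function names, keys, and metadata fields are illustrative only:&lt;/p&gt;

```python
# Flat namespace: every object is addressed by a single key, and its
# metadata travels with it -- no directory hierarchy, no paths to walk.
object_store = {}

def put_object(key, data, **metadata):
    # Objects are written whole; "modifying" one means replacing it.
    object_store[key] = (bytes(data), dict(metadata))

def get_object(key):
    return object_store[key]

put_object("videos/cat.mp4", b"...", camera="X100", shot_in="Oslo")
data, meta = get_object("videos/cat.mp4")
print(meta["camera"])   # rich, queryable per-object metadata
```

&lt;p&gt;Note that "videos/cat.mp4" looks like a path but is just a key: the slash has no structural meaning, which is exactly what makes the namespace flat and easy to distribute.&lt;/p&gt;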

&lt;p&gt;&lt;a href="https://www.debug.school/images/YabKETr0pIM5fBg6q8v--TiMvZoOG_1DJbTX2-1Qjoo/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvYTh4b3Ft/dTduenpjdXF3MHZz/NWgucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/YabKETr0pIM5fBg6q8v--TiMvZoOG_1DJbTX2-1Qjoo/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvYTh4b3Ft/dTduenpjdXF3MHZz/NWgucG5n" alt="Image description" width="510" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/vpdtxm0FXj_nqi8j3o_cPoXRmsCbXnx1ndzTbEPCVxo/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvYjBvczhj/bW04dGRyZjlwYWt3/dGIucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/vpdtxm0FXj_nqi8j3o_cPoXRmsCbXnx1ndzTbEPCVxo/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvYjBvczhj/bW04dGRyZjlwYWt3/dGIucG5n" alt="Image description" width="880" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Public and Private ip</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Thu, 01 Dec 2022 07:43:37 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/public-and-private-ip-5b1d</link>
      <guid>https://www.debug.school/pavanip2011_561/public-and-private-ip-5b1d</guid>
      <description>&lt;p&gt;IP Address: An IP (Internet Protocol) address is a numerical label assigned to the devices connected to a computer network that uses the IP for communication.&lt;/p&gt;

&lt;p&gt;An IP address acts as an identifier for a specific machine on a particular network. It also helps you to establish a virtual connection between a destination and a source. The IP address is also called an IP number or internet address. It helps you to specify the technical format of the addressing and packet scheme. Most networks combine TCP with IP.&lt;br&gt;
IP Address is divided into two parts:&lt;/p&gt;

&lt;p&gt;-&amp;gt; Prefix: The prefix part of an IP address identifies the physical network to which the computer is attached. The prefix is also known as the network address.&lt;br&gt;
-&amp;gt; Suffix: The suffix part identifies the individual computer on the network. The suffix is also called the host address.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Private IP&lt;/strong&gt;: The scope of a private IP is local; it is used to communicate within the network. The private IP addresses of the systems connected in a network differ in a uniform manner. It works only on a LAN. It can be used to load the network operating system. It is available free of cost. A private IP can be found by entering ipconfig at the command prompt. Private IP addresses use numeric codes that are not globally unique and can be reused on other networks. Private IP addresses are relatively secure. Devices with private IP addresses require NAT to communicate with devices outside the network.&lt;br&gt;
Ranges: 10.0.0.0 - 10.255.255.255&lt;br&gt;
172.16.0.0 - 172.31.255.255&lt;br&gt;
192.168.0.0 - 192.168.255.255&lt;br&gt;
Example: 192.168.1.10&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Public IP&lt;/strong&gt;: The scope of a public IP is global. It is used to communicate outside the network. Public IP addresses may differ in a uniform or non-uniform manner. It is used to get internet service. It is controlled by the ISP. It is not free of cost. A public IP can be found by searching "what is my IP" on Google. Besides the private IP ranges, the rest are public. A public IP uses a numeric code that is unique and cannot be used by others. A public IP address has no inherent security and is subject to attack. A public IP does not require network address translation.&lt;br&gt;
Example: 17.5.7.8&lt;/p&gt;
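&lt;p&gt;Python's standard-library ipaddress module already knows the RFC 1918 private ranges listed above, so classifying an address is a one-attribute check:&lt;/p&gt;

```python
import ipaddress

# is_private is True for 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
# (and a few other reserved ranges); everything else is public.
for addr in ["192.168.1.10", "172.16.0.1", "10.255.255.255", "17.5.7.8"]:
    ip = ipaddress.ip_address(addr)
    kind = "private" if ip.is_private else "public"
    print(f"{addr}: {kind}")
```

&lt;p&gt;The first three addresses print as private and 17.5.7.8 as public, matching the ranges in the article.&lt;/p&gt;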

</description>
    </item>
    <item>
      <title>Region and availability zone</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Thu, 01 Dec 2022 07:07:37 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/region-and-availablity-zone-8d5</link>
      <guid>https://www.debug.school/pavanip2011_561/region-and-availablity-zone-8d5</guid>
      <description>&lt;p&gt;AWS Regions:&lt;br&gt;
AWS Regions are separate geographic areas that AWS uses to house its infrastructure. These are distributed around the world so that customers can choose a region closest to them in order to host their cloud infrastructure there. The closer the region, the better, to reduce network latency as much as possible for your end users and to provide fast service.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each AWS Region is designed to be isolated from the other AWS Regions. This design achieves the greatest possible fault tolerance and stability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/6JqWAPZMDduKxRSyyYhvEAbiZqQuYEQSladFYHtCxzM/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvMmIwZnVu/dGtpYWMyZnRnYTFs/N2cucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/6JqWAPZMDduKxRSyyYhvEAbiZqQuYEQSladFYHtCxzM/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvMmIwZnVu/dGtpYWMyZnRnYTFs/N2cucG5n" alt="Image description" width="456" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In general, try to follow these best practices when you choose a region, to ensure top performance and resilience:&lt;br&gt;
-&amp;gt; Proximity: Choose a region closest to your location and your customers’ location to optimize network latency.&lt;br&gt;
-&amp;gt; Services: Try and think about what your most needed services are. Usually, the newest services start in a few main regions and then roll out to other regions later.&lt;br&gt;
-&amp;gt; Cost: Certain regions will cost more than others, so use built-in AWS calculators to do rough cost estimates to inform your choices.&lt;br&gt;
-&amp;gt; Service Level Agreement (SLA): Just as with cost, your SLA details will vary by region, so be sure to be aware of what your needs are and if they’re being met.&lt;br&gt;
-&amp;gt; Compliance: You may need to meet regulatory compliance needs such as GDPR by hosting your deployment in a specific — or multiple regions.&lt;/p&gt;

&lt;p&gt;Availability Zone: An AWS Availability Zone (AZ) is the logical building block that makes up an AWS Region. Each AWS Region has multiple, isolated locations known as Availability Zones. There are currently 69 AZs, which are isolated locations — data centers — within a region. &lt;br&gt;
AWS Availability Zones give you the flexibility to launch production apps and resources that are highly available, resilient/fault-tolerant, and scalable as compared to using a single data center.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/hbqZzl3myV3iSmtiIEAk1KDOF1Ic3eHorAH_VmkConA/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvaG50a3Q3/eWtsMWx0bGt1c3N3/YXMucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/hbqZzl3myV3iSmtiIEAk1KDOF1Ic3eHorAH_VmkConA/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvaG50a3Q3/eWtsMWx0bGt1c3N3/YXMucG5n" alt="Image description" width="469" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>DNS server record types</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Thu, 01 Dec 2022 05:05:58 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/dns-server-record-types-2ac5</link>
      <guid>https://www.debug.school/pavanip2011_561/dns-server-record-types-2ac5</guid>
      <description>&lt;p&gt;&lt;strong&gt;DNS&lt;/strong&gt;: Domain Name System. it translates domain names to IP addresses so browsers can load internet resources. Each device connected to the Internet has a unique IP address which other machines use to find the device. DNS servers eliminate the need for humans to memorize IP addresses such as 192.168.1.1 (in IPv4), or more complex newer alphanumeric IP addresses such as 2400:cb00:2048:1::c629:d7a2 (in IPv6).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DNS records&lt;/strong&gt;: These records provide important information about a hostname or domain, including the current IP address for a domain. They are stored in text files on the authoritative DNS server.&lt;br&gt;
There are 5 major types of DNS records:&lt;br&gt;
&lt;strong&gt;A record&lt;/strong&gt;: The "A" stands for address. An A record maps a specific hostname or domain to its IP address. The A record supports only IPv4 addresses.&lt;br&gt;
Uses of the A record: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is used for IP address lookup. &lt;/li&gt;
&lt;li&gt;By using the A record, a web browser is able to load a website using its domain name.&lt;/li&gt;
&lt;li&gt;Domain Name System based blackhole lists (DNSBL) &lt;/li&gt;
&lt;li&gt;&lt;p&gt;The A record is used to block mail from spam sources. &lt;br&gt;
&lt;strong&gt;AAAA record&lt;/strong&gt;: It points to IPv6 addresses.&lt;br&gt;
Uses of the AAAA record:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;These records are used to resolve a domain name to the newer IPv6 protocol address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;These records are used for DNS resolution.&lt;br&gt;
&lt;strong&gt;CNAME record&lt;/strong&gt;: A canonical name (CNAME) record points a domain name to another domain. In a CNAME record, the alias does not point to an IP address, and the domain name that the alias points to is the canonical name.&lt;br&gt;
Use of the CNAME record: &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Running multiple subdomains for different purposes on the same server.&lt;br&gt;
&lt;strong&gt;NS record&lt;/strong&gt;: A nameserver (NS) record specifies the authoritative DNS server for a domain. In general, the NS record helps point to where internet applications like a web browser can find the IP address of a domain name.&lt;br&gt;
&lt;strong&gt;MX record&lt;/strong&gt;: A mail exchange (MX) record is a DNS record type which shows where emails for a domain should be routed. This record makes it possible to direct emails to a mail server.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use of MX record:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is used to hand off emails to a dedicated email server.
*&lt;/li&gt;
&lt;/ul&gt;
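The record types above can be sketched as a tiny in-memory zone with a lookup that follows CNAME aliases. All names and addresses here are hypothetical examples, not real DNS data:

```python
# Illustrative sketch: a tiny in-memory "zone" covering the five major
# DNS record types. Names and addresses are made-up placeholders.
ZONE = {
    ("example.com.", "A"): ["93.184.216.34"],          # IPv4 address
    ("example.com.", "AAAA"): ["2606:2800:220:1:248:1893:25c8:1946"],  # IPv6
    ("www.example.com.", "CNAME"): ["example.com."],   # alias to canonical name
    ("example.com.", "NS"): ["ns1.example.com."],      # authoritative server
    ("example.com.", "MX"): ["10 mail.example.com."],  # priority plus mail host
}

def lookup(name, rtype):
    """Resolve a record, following one level of CNAME indirection."""
    records = ZONE.get((name, rtype))
    if records is None:
        # No direct record: try the canonical name behind a CNAME alias.
        cname = ZONE.get((name, "CNAME"))
        if cname:
            return lookup(cname[0], rtype)
    return records

print(lookup("example.com.", "A"))      # the A record itself
print(lookup("www.example.com.", "A"))  # follows the CNAME to example.com.
```

A real resolver does much more (caching, recursion, TTLs), but the alias-chasing step is the same idea a CNAME lookup performs.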

</description>
    </item>
    <item>
      <title>Network, subnet, internet gateway, route table</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Wed, 30 Nov 2022 12:14:36 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/aws-assignment-14do</link>
      <guid>https://www.debug.school/pavanip2011_561/aws-assignment-14do</guid>
<description>&lt;p&gt;NETWORK: A network consists of two or more computers that are linked in order to share resources, exchange files, or allow electronic communications. The computers on a network are connected through cables, telephone lines, radio waves, satellites, or infrared beams.&lt;br&gt;
Benefits of a Network:&lt;/p&gt;

&lt;p&gt;-&amp;gt; Information sharing – Authorized users can use other computers on the network to access and share information and data. This could include special group projects, databases, etc.&lt;br&gt;
 -&amp;gt; Hardware sharing – One device connected to a network, such as a printer or a scanner, can be shared by many users.&lt;br&gt;
 -&amp;gt; Software sharing – Instead of purchasing and installing a software program on each computer, it can be installed on the server. All of the users can then access the program from a single location.&lt;br&gt;
 -&amp;gt; Collaborative environment – Users can work together on group projects by combining the power and capabilities of diverse equipment.&lt;br&gt;
Risks of networking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Equipment malfunctions&lt;/li&gt;
&lt;li&gt;System failures&lt;/li&gt;
&lt;li&gt;Computer hackers&lt;/li&gt;
&lt;li&gt;Virus attacks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Types of network:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Local Area Network (LAN): It is usually privately owned and links the devices in a single office, building, or campus. Its size is limited to a few kilometers. It is designed to allow resources (hardware, software, or data) to be shared between personal computers or workstations. In general, a given LAN uses only one type of transmission medium. The most common LAN topologies are bus, ring, and star.&lt;/li&gt;
&lt;li&gt;Metropolitan Area Network (MAN): It is designed to extend over an entire city. A company can use a MAN to connect the LANs in all its offices throughout a city. It may be wholly owned and operated by a private company, or it may be a service provided by a public company (such as a local telephone company).&lt;/li&gt;
&lt;li&gt;Wide Area Network (WAN): It provides long-distance transmission of data over a country, a continent, or even worldwide. A WAN that is wholly owned and operated by a single company is referred to as an enterprise network.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;SUBNET: A subnet is a segmented piece of a larger network. Subnets are a logical partition of an IP network into multiple, smaller network segments.  One goal of a subnet is to split a large network into a grouping of smaller, interconnected networks to help minimize traffic. Subnetting, the segmentation of a network address space, improves address allocation efficiency.&lt;br&gt;
-&amp;gt; Each subnet allows its connected devices to communicate with each other, while routers are used to communicate between subnets. The size of a subnet depends on the connectivity requirements and the network technology employed. A point-to-point subnet allows two devices to connect, while a data center subnet might be designed to connect many more devices.&lt;br&gt;
Uses of Subnets: &lt;br&gt;
  1) Reallocating IP addresses. Each class has a limited number of host allocations; for example, networks with more than 254 devices need a Class B allocation. If a network administrator is working with a Class B or C network and needs to allocate 150 hosts for three physical networks located in three different cities, they would need to either request more address blocks for each network -- or divide a network into subnets that enable administrators to use one block of addresses on multiple physical networks.&lt;br&gt;
  2) Relieving network congestion. If much of an organization's traffic is meant to be shared regularly between the same cluster of computers, placing them on the same subnet can reduce network traffic. Without a subnet, all computers and servers on the network would see data packets from every other computer.&lt;br&gt;
 3) Improving network security. Subnetting allows network administrators to reduce network-wide threats by quarantining compromised sections of the network and by making it more difficult for trespassers to move around an organization's network.&lt;/p&gt;
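The address-reallocation idea above can be sketched with Python's standard ipaddress module. The 192.168.0.0/24 block and the per-site split are hypothetical, echoing the example of dividing one address block across multiple physical networks:

```python
import ipaddress

# A minimal subnetting sketch using the stdlib ipaddress module.
# The /24 block here is a hypothetical example address space.
block = ipaddress.ip_network("192.168.0.0/24")

# Divide the /24 into four /26 subnets, e.g. one per physical site.
subnets = list(block.subnets(new_prefix=26))
for net in subnets:
    # num_addresses counts every address in the subnet, including the
    # network and broadcast addresses, so usable IPv4 hosts are two fewer.
    print(net, "usable hosts:", net.num_addresses - 2)
```

Each /26 carries 64 addresses (62 usable hosts), which fits the "150 hosts across three sites" scenario without requesting a second address block.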

&lt;p&gt;INTERNET GATEWAY: An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. It supports IPv4 and IPv6 traffic. It does not cause availability risks or bandwidth constraints on your network traffic. &lt;br&gt;
ROUTE TABLE: A route table is a data table with a set of rules used to determine where data packets travelling over an Internet Protocol (IP) network will be directed. All IP-enabled devices, including routers and switches, use route tables.&lt;/p&gt;
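The route-table rules above can be sketched as longest-prefix matching, the rule routers use to pick the most specific route for a packet. The prefixes and next-hop names are made-up examples, not a real device's configuration:

```python
import ipaddress

# A toy route table: (destination prefix, next-hop target) pairs.
# Targets are hypothetical labels, not real interfaces.
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "internet-gateway"),    # default route
    (ipaddress.ip_network("10.0.0.0/16"), "local"),             # the local network
    (ipaddress.ip_network("10.0.1.0/24"), "firewall-appliance"),
]

def next_hop(dest):
    """Return the target of the most specific (longest-prefix) matching route."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, target) for net, target in ROUTES if addr in net]
    # The route with the longest prefix (largest prefixlen) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.1.5"))   # matched by /24, /16, and /0; the /24 wins
print(next_hop("8.8.8.8"))    # only the default route matches
```

This is the same logic behind a VPC route table sending 0.0.0.0/0 to an internet gateway while keeping in-VPC traffic on the local route.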

</description>
    </item>
    <item>
      <title>Policy and Permission</title>
      <dc:creator>Pavani</dc:creator>
      <pubDate>Wed, 30 Nov 2022 12:13:08 +0000</pubDate>
      <link>https://www.debug.school/pavanip2011_561/aws-assignment-587p</link>
      <guid>https://www.debug.school/pavanip2011_561/aws-assignment-587p</guid>
<description>&lt;p&gt;A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (a user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied.&lt;br&gt;
 -&amp;gt; IAM policies define permissions for an action regardless of the method that you use to perform the operation.&lt;br&gt;
 -&amp;gt; AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies.&lt;br&gt;
Policy types:&lt;br&gt;
The following policy types, listed in order from most frequently used to least frequently used, are available for use in AWS.&lt;br&gt;
. Identity-based policies – Attach managed and inline policies to IAM identities (users, groups to which users belong, or roles). Identity-based policies grant permissions to an identity.&lt;br&gt;
. Resource-based policies – Attach inline policies to resources. The most common examples of resource-based policies are Amazon S3 bucket policies and IAM role trust policies. Resource-based policies grant permissions to the principal that is specified in the policy. Principals can be in the same account as the resource or in other accounts.&lt;/p&gt;

&lt;p&gt;. Permissions boundaries – Use a managed policy as the permissions boundary for an IAM entity (user or role). That policy defines the maximum permissions that the identity-based policies can grant to an entity but does not grant permissions. Permissions boundaries do not define the maximum permissions that a resource-based policy can grant to an entity.&lt;/p&gt;

&lt;p&gt;. Organizations SCPs – Use an AWS Organizations service control policy (SCP) to define the maximum permissions for account members of an organization or organizational unit (OU). SCPs limit permissions that identity-based policies or resource-based policies grant to entities (users or roles) within the account, but do not grant permissions.&lt;/p&gt;

&lt;p&gt;. Access control lists (ACLs) – Use ACLs to control which principals in other accounts can access the resource to which the ACL is attached. ACLs are similar to resource-based policies, although they are the only policy type that does not use the JSON policy document structure. ACLs are cross-account permissions policies that grant permissions to the specified principal. ACLs cannot grant permissions to entities within the same account.&lt;/p&gt;

&lt;p&gt;. Session policies – Pass advanced session policies when you use the AWS CLI or AWS API to assume a role or a federated user. Session policies limit the permissions that the role or user's identity-based policies grant to the session. Session policies limit permissions for a created session, but do not grant permissions. For more information, see Session Policies.&lt;/p&gt;
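As a concrete sketch of the identity-based type, the following builds a minimal IAM JSON policy document as a Python dict. The bucket name is a hypothetical placeholder; the Version/Statement layout follows the standard IAM policy structure:

```python
import json

# Minimal identity-based policy sketch: allow read access to one
# hypothetical S3 bucket. "example-bucket" is a placeholder name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",    # the bucket itself (ListBucket)
                "arn:aws:s3:::example-bucket/*",  # objects in it (GetObject)
            ],
        }
    ],
}

# Serialize to the JSON form that would be attached to a user, group, or role.
print(json.dumps(policy, indent=2))
```

Attached to an identity, this grants exactly these actions and nothing else; a permissions boundary or SCP could still narrow what it effectively allows.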

</description>
    </item>
  </channel>
</rss>
