
Saturday, March 23, 2024

Replace characters in folder names using Windows PowerShell

This article describes a method to rename folders by replacing one character with another. In this use case we have a folder full of subfolders named in a particular pattern: a date followed by a description. The words in the folder names are separated by the hyphen ("-") character. Our objective is to keep the date part intact while replacing all the hyphens after the date with whitespace characters.

The before and after states of the folder names are illustrated in the screenshot below.

Renamed folders with hyphens replaced with whitespace characters

For this example, we assume that the folders are located in a local drive path named "C:\Users\SampleUser\Desktop\2006".

Open the Windows PowerShell and change directory to "C:\Users\SampleUser\Desktop\2006". Next run the following script at the prompt. Remember to set the value of $directoryPath below to the appropriate path according to your situation:

# Set the directory path where you want to rename folders
$directoryPath = "C:\Users\SampleUser\Desktop\2006"

# Get a list of all folders in the specified directory
$folders = Get-ChildItem -Path $directoryPath -Directory

# Loop through each folder
foreach ($folder in $folders) {
    # Check if folder name is longer than 10 characters and contains '-'
    # character at position 11
    if ($folder.Name.Length -ge 11 -and $folder.Name.Substring(10, 1) `
    -eq "-") {
        # Construct the new folder name by replacing '-' with a blank
        # space starting at character number 11
        $newName = $folder.Name.Substring(0, 10) + `
        ($folder.Name.Substring(10) -replace '-', ' ')
       
        # Get the full path of the old and new folder names
        $oldPath = Join-Path -Path $directoryPath -ChildPath $folder.Name
        $newPath = Join-Path -Path $directoryPath -ChildPath $newName
       
        # Rename the folder
        Rename-Item -Path $oldPath -NewName $newName -ErrorAction SilentlyContinue
        Write-Host "Renamed folder: $($folder.Name) to $($newName)"
    }
}

On successful execution of this script, you should see an output like this.

Renamed folder: 2019-01-16-Travel-to-France to 2019-01-16 Travel to France
Renamed folder: 2019-03-07-Meijer to 2019-03-07 Meijer
Renamed folder: 2019-05-04-National-Train-Day to 2019-05-04 National Train Day
Renamed folder: 2019-05-18-Woodland-Park to 2019-05-18 Woodland Park
Renamed folder: 2019-05-26-Maumee-Bay-State-Park to 2019-05-26 Maumee Bay State Park
Renamed folder: 2019-07-05-African-Safari-Wildlife-Park to 2019-07-05 African Safari Wildlife Park
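
If you would rather do the same thing outside PowerShell, here is a rough Python sketch of the equivalent rename logic. It is an illustration only, not part of the script above; the path is the same example path and should be adjusted to your situation.

# Rough Python equivalent of the PowerShell script above (illustration only).
# The directory path is the same example path used earlier; adjust as needed.
import os

directory_path = r"C:\Users\SampleUser\Desktop\2006"

for name in os.listdir(directory_path):
    old_path = os.path.join(directory_path, name)
    # Only touch folders that look like "YYYY-MM-DD-Description"
    if os.path.isdir(old_path) and len(name) >= 11 and name[10] == "-":
        # Keep the first 10 characters (the date) and replace the
        # remaining hyphens with spaces
        new_name = name[:10] + name[10:].replace("-", " ")
        os.rename(old_path, os.path.join(directory_path, new_name))
        print(f"Renamed folder: {name} to {new_name}")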

Now you are done.


Saturday, October 14, 2023

How to get stock prices from Google Finance in Google Sheets

Google Sheets has a very convenient feature to grab stock prices and other details from its Google Finance service and display them in your spreadsheets.

Photo by Oren Elbaz on Unsplash

It not only lets you get the latest real-time stock prices but also historical prices. This feature can be an interesting add-on to your financial spreadsheets, a way to build a tracker for your holdings, or just a simple watchlist.

Get the real-time price of a stock

The simplest way to get the real-time stock prices is to use the =GOOGLEFINANCE() function with just the stock ticker or symbol using the following syntax.

=GOOGLEFINANCE(ticker)

For example, writing the following in a cell would display the current stock price of Apple.

=GOOGLEFINANCE("AAPL")

In Google Sheets, it looks like this:

The =GOOGLEFINANCE() function also accepts several attributes that can be used to get more information about a stock including the name, current and historical prices.

We will get into more examples of using attributes later, but as an example, using the "name" attribute like =GOOGLEFINANCE("AAPL","name") returns the name of the company behind the stock ticker.


Creating a watchlist

You can easily create a list of stocks that you want to monitor using the real-time attributes that are available by default with the =GOOGLEFINANCE() function. To use an attribute just add the attribute as a parameter in the function. For example, to get the market capitalization, use something like:

=GOOGLEFINANCE("AAPL""marketcap")

In Google Sheets, it looks like this:


Here are some attributes that can be useful for building a watchlist. For the latest and full list of attributes, refer to this page on Google Support.

  • "price" - Real-time price quote, delayed by up to 20 minutes. This is a default attribute.
  • "priceopen" - The price as of market open.
  • "high" - The current day's high price.
  • "low" - The current day's low price.
  • "volume" - The current day's trading volume.
  • "marketcap" - The market capitalization of the stock.
  • "tradetime" - The time of the last trade.
  • "datadelay" - How far delayed the real-time data is.
  • "volumeavg" - The average daily trading volume.
  • "pe" - The price/earnings ratio.
  • "eps" - The earnings per share.
  • "high52" - The 52-week high price.
  • "low52" - The 52-week low price.
  • "change" - The price change since the previous trading day's close.
  • "beta" - The beta value.
  • "changepct" - The percentage change in price since the previous trading day's close.
  • "closeyest" - The previous day's closing price.
  • "shares" - The number of outstanding shares.
  • "currency" - The currency in which the security is priced.

Using these attributes, a watchlist can be built that looks like this:

Ticker or symbol for a particular stock

The easiest way to get the ticker for a particular stock is to perform a search in your favorite search engine with the name of the company followed by the word "stock".

For example, in Google, search with the keywords "Apple stock" to get the result shown below where the stock ticker is listed as NASDAQ: AAPL.

Now use the symbol in a Google Sheets formula like =GOOGLEFINANCE("NASDAQ:AAPL") to get the current price. Notice that we have removed the blank space from the ticker shown in the search result.

Notice that the symbol contains two parts which are separated by a colon. The part on the left of the colon is the designated exchange code and the part on the right is the symbol or ticker. For US stocks, Google Sheets automatically grabs the prices if you just use the ticker like =GOOGLEFINANCE("AAPL") but for international stocks it is best to specify the designated exchange code along with the ticker with no space in between. To get accurate results and avoid discrepancies it is recommended to always use the designated exchange code along with the ticker like =GOOGLEFINANCE("NASDAQ:AAPL").

To get the ticker of Tata Steel, which is listed on the National Stock Exchange of India, a search would return something like what is shown below.

Now use the symbol in a Google Sheets formula like =GOOGLEFINANCE("NSE:TATASTEEL") to get the current price.


A similar example is the Tessenderlo Group, which is listed on the Brussels Stock Exchange, for which you can use =GOOGLEFINANCE("EBR:TESB").

Historical stock price

Historical stock prices can aid your analysis when choosing the right pick. The =GOOGLEFINANCE() function can also be used to get this historical data using these attributes.

  • "open" - The opening price for the specified date(s).
  • "close" - The closing price for the specified date(s).
  • "high" - The high price for the specified date(s).
  • "low" - The low price for the specified date(s).
  • "volume" - The volume for the specified date(s).
  • "all" - All of the above.

To get the list of prices for a stock from a start date to an end date use the following syntax:

=GOOGLEFINANCE(ticker,[attribute],[start date],[end date | number of days],"DAILY")

For example, the following formula gets the daily closing price of the stock between two dates:

=GOOGLEFINANCE("NASDAQ:AAPL","price",DATE(2020,1,1),DATE(2020,1,14),"DAILY")

The following formula can be used to get the closing price of a stock for 8 trading days starting from a particular date:

=GOOGLEFINANCE("NASDAQ:AAPL","price",DATE(2020,1,1),8,"DAILY")

The following formula can be used to get the closing price on trading days over the last 30 days:

=GOOGLEFINANCE("NASDAQ:AAPL","price",TODAY()-30,TODAY(),"DAILY")

The following formula will get all the historical attributes of a stock for 8 trading days starting from a particular date:

=GOOGLEFINANCE("NASDAQ:AAPL", "all", DATE(2020,1,1),8,"DAILY")

Charting Historical Prices using SPARKLINE

The =SPARKLINE() function can be used in combination with historical stock prices from =GOOGLEFINANCE() to display a chart in Google Sheets. The following formula displays a chart plotting the price trend of the Apple stock over the last 90 days.

=SPARKLINE(GOOGLEFINANCE("NASDAQ:AAPL", "price",TODAY()-90, TODAY()))

Sparklines can be used to enhance your watchlist by providing a miniature visual of how prices are trending.


Mutual Fund Prices

The =GOOGLEFINANCE() function can also be used to grab prices of mutual funds. Several attributes are available for this:

  • "closeyest" - The previous day's closing price.
  • "date" - The date at which the net asset value was reported.
  • "returnytd" - The year-to-date return.
  • "netassets" - The net assets.
  • "change" - The change in the most recently reported net asset value and the one immediately prior.
  • "changepct" - The percentage change in the net asset value.
  • "yieldpct" - The distribution yield, the sum of the prior 12 months' income distributions (stock dividends and fixed income interest payments) and net asset value gains divided by the previous month's net asset value number.
  • "returnday" - One-day total return.
  • "return1" - One-week total return.
  • "return4" - Four-week total return.
  • "return13" - Thirteen-week total return.
  • "return52" - Fifty-two-week (annual) total return.
  • "return156" - 156-week (3-year) total return.
  • "return260" - 260-week (5-year) total return.
  • "incomedividend" - The amount of the most recent cash distribution.
  • "incomedividenddate" - The date of the most recent cash distribution.
  • "capitalgain" - The amount of the most recent capital gain distribution.
  • "morningstarrating" - The Morningstar "star" rating.
  • "expenseratio" - The fund's expense ratio.

For example, to get the last closing price of the Schwab S&P 500 Index Fund, use something like:

=GOOGLEFINANCE("MUTF:SWPPX""closeyest")

A combination of these attributes can be used to create a watchlist that looks like this.


Cryptocurrency Prices

The =GOOGLEFINANCE() function can also grab real-time prices for a limited set of cryptocurrencies. At the time of this writing, it works with Bitcoin and Ethereum. To do this, use the cryptocurrency symbol that you see in the URL as shown below.


From the URL shown in the example above, you can take the BTC-USD ticker and use it directly like this in a formula.

=GOOGLEFINANCE("BTC-USD")

This will pull the price and display it in a cell in Google Sheets.


Conclusion

To sum up, monitoring stocks, mutual funds and cryptocurrency within Google Sheets is a potent method for analyzing your holdings. It is user-friendly, adaptable, and doesn't cost a dime, and it lets you promptly access both historical and real-time stock information in your spreadsheet. I hope this article holds some helpful information for you.

This is also posted on LinkedIn.


Sunday, September 24, 2023

Find files with path lengths exceeding CD/DVD restrictions using Windows PowerShell

If you have been burning CDs or DVDs for your personal archives or any other purpose, you already know that the disc format allows file paths only up to 255 characters. If you are burning files in bulk, it is often difficult to ensure that your file path lengths conform to the prescribed limits. Or, if you are like me, you probably want to limit the path length to 100-150 characters for manageability and protection against burn failures.

For this example, we consider a bunch of subfolders and files in a local drive path named "D:\MUSIC-Full-Backup\BACKUP0933\ARCHIVE05". We intend to burn the content of this folder to the root of a DVD disc. For this purpose, we want to know which files breach the 100-character path length limit so we can take appropriate action.

Open the Windows PowerShell and change directory to "D:\MUSIC-Full-Backup\BACKUP0933\ARCHIVE05". Next run the following command at the prompt:

$curPath = "D:\MUSIC-Full-Backup\BACKUP0933\ARCHIVE05"; cmd /c dir /s /b *.* | ? { $_.Length - $curPath.Length -gt 100 }

Here we are subtracting the length of the current path from the full path of each file, as the base path will not be part of the path when the content is burned to the DVD. This should list all files in the folder whose remaining path exceeds 100 characters. An example output is below.

D:\MUSIC-Full-Backup\BACKUP0933\ARCHIVE05\Greatest Collection of the Century\JD YNK ULEATOBS NKY ULEATOBS\ARH OKIKIS VS LSNEIA LYTFTUREB JD KYN MAHUSP IXM.mp3
D:\MUSIC-Full-Backup\BACKUP0933\ARCHIVE05\Greatest Collection of the Century\JD YNK ULEATOBS NKY ULEATOBS\REMAD NO VS UHJT EINM ARB ILIONV TEEHM IMX FT SPEENAD.mp3
D:\MUSIC-Full-Backup\BACKUP0933\ARCHIVE05\Greatest Collection of the Century\JD YNK ULEATOBS NKY ULEATOBS\UHTJE HULBA IADY RESROPIVSEG NRETAC IMX.mp3

Now you can go to the individual folder and plan your strategy for shortening the path lengths.
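
If you prefer a cross-platform way to run the same check, here is a rough Python sketch of the idea. It is an illustration only; the base path and the 100-character limit mirror the example above.

# Rough Python sketch of the same path-length check (illustration only).
# base_path and the 100-character limit mirror the PowerShell example above.
import os

base_path = r"D:\MUSIC-Full-Backup\BACKUP0933\ARCHIVE05"
limit = 100

for root, dirs, files in os.walk(base_path):
    for filename in files:
        full_path = os.path.join(root, filename)
        # Length of the path relative to the folder being burned
        if len(full_path) - len(base_path) > limit:
            print(full_path)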

Tuesday, September 19, 2023

Using the Windows PowerShell to bulk rename files by stripping a part from the filename

This post demonstrates how to rename files in bulk by stripping the same part of the filename from each file using the Windows PowerShell.

For the sake of example, the screenshot below shows a list of MP3 files from which we want to remove artist names. The part of the filename that we want to remove is marked in red.




This can easily be achieved in PowerShell with a command like the one below. Remember to take a backup of the folder just in case you need to restore it later. Before executing the command, change directory to the folder in which the files reside:


Get-ChildItem *.mp3 | ForEach-Object { Rename-Item $_ $_.Name.Replace("Paul Revere & The Raiders - ", "") }


The command executes silently and does a neat job of cleaning up the file names. Here is the result.



Saturday, September 9, 2023

How to count the number of values in a comma separated (CSV) string in SQL

As the title states, this post demonstrates how to count the number of values in a comma-separated string in SQL. The method used here is to take the number of characters in the string, subtract the number of characters in the string after all the commas are removed, and add 1. This technique also works if there are empty values in the CSV string.
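
Before looking at the SQL versions, the same arithmetic can be sketched in a couple of lines of Python, purely as an illustration of the idea:

# Count the values in a CSV string:
# (length of string) - (length of string without commas) + 1
csv_str = "empid,fname,mane,lname,age,addr,dob,doj"
value_count = len(csv_str) - len(csv_str.replace(",", "")) + 1
print(value_count)  # prints 8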

On SQL Server you can use something like this:

DECLARE @CSVStr NVARCHAR(50);
SET @CSVStr = N'empid,fname,mane,lname,age,addr,dob,doj';

DECLARE @ValueCount INT;

-- Length of string minus length of string after all commas are removed
SET @ValueCount = LEN(@CSVStr) - LEN(REPLACE(@CSVStr, ',', '')) + 1;

SELECT @ValueCount AS ValCount;

Output:

8

To do the same in MySQL use the following (tested on version 8.0.30-0ubuntu0.20.04.2):

SET @CSVStr = 'empid,fname,mane,lname,age,addr,dob,doj';

-- Length of string minus length of string after all commas are removed
SET @ValueCount = CHAR_LENGTH(@CSVStr) - CHAR_LENGTH(REPLACE(@CSVStr, ',',
    '')) + 1;

SELECT @ValueCount AS ValCount;

Output:

8

On a proprietary implementation of SQL like the Aspen SQLplus v14, the following would work:

LOCAL CSVStr CHAR(50);
CSVStr = 'empid,fname,mane,lname,age,addr,dob,doj';

LOCAL ValueCount INT;

-- Length of string minus length of string after all commas are removed
ValueCount = CHARACTER_LENGTH(CSVStr) -
             CHARACTER_LENGTH(REPLACE(',' WITH '' IN CSVStr)) + 1;

WRITE ValueCount;

Output:

8



Friday, August 18, 2023

Using Python and BeautifulSoup to web scrape Times of India news headlines

According to Wikipedia, "web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites." In this post, we will create a small program in Python to scrape top headlines from Times of India's news headlines page using the BeautifulSoup library.

Sample webpage showing the top news headlines

Specifically, our program will fetch the Times of India Headlines page and extract the prime news headlines at the top of the page. As of this writing, the page displays 6 headlines in that section, which is what we want to scrape. In the screenshot of the webpage above, our point of interest is the highlighted section containing the top 6 headlines.

The programming language we will use is Python 3. Along with that, we will also use the BeautifulSoup 4 package for parsing the HTML. I will assume that you already have a system on which these prerequisites are installed and ready to run, and that you have an editor and a Python environment to run the program. For the purpose of this illustration, I will use Google Colab to write and execute the Python code.

Part 1: Scrape the website

To start with, we will write simple code that fetches the data and outputs the scraped text in the editor's output window. Type the following code in your Python editor; I will explain it later in this post. A copy of this code is available in my TOITopHeadlines v1.0 repository on GitHub.


# This program scrapes a web page from Times of India to extract
# top headlines and prints it in the output window.

import requests
from bs4 import BeautifulSoup

def toi_topheadlines():
  url = "https://timesofindia.indiatimes.com/home/headlines"
  page_request = requests.get(url)
  page_content = page_request.content
  soup = BeautifulSoup(page_content,"html.parser")

  count = 1

  for divtag_c02 in soup.find_all('div', {'id': 'c_02'}):
    for divtag_0201 in divtag_c02.find_all('div', {'id': 'c_0201'}):
      divtag_hwdt1 = divtag_0201.find('div', {'id': 'c_headlines_wdt_1'})
      for divtag_topnl in divtag_hwdt1.find_all('div',
       {'class': 'top-newslist'}):
        for ultag in divtag_topnl.find_all('ul',{'class': 'clearfix'}):
          for litag in ultag.find_all('li'):
            for spantitle in litag.find_all('span', {'class': 'w_tle'}):
              href = spantitle.find('a')['href']
              if href.find("/", 0) == 0:
                href = "https://timesofindia.indiatimes.com" + href
                print(str(count) + ". " + spantitle.find('a')['title'] +
                      " - " + href)
                count = count + 1

if __name__ == "__main__":
  toi_topheadlines()

print("\n" + "end")

Executing the code will make it extract the HTML from the URL, parse out the required data and output the list of news headline titles and respective URLs as highlighted in the screenshot below:

News headlines scraped using Python

If you have managed to get that working, congratulations. You have scraped the top headlines and can now use them in your own creative ways. Next, we will delve into what we did and what got us here.

Now, take a look at the portion of the source code that goes through a chain of for loops to crawl into the HTML tags. This corresponds exactly to the way the markup is structured in the web page. You can examine the HTML markup by opening the browser's Developer Tools and inspecting the code behind the UI elements.

Inspecting HTML tag structure

Your program has to be tuned according to the HTML markup structure of the page that you are trying to scrape.
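
For instance, the same traversal can be expressed with a single CSS selector, which keeps that tuning in one place. The sketch below is an illustration only; it assumes the same ids and classes (c_02, c_0201, c_headlines_wdt_1, top-newslist, clearfix, w_tle) used in the code above and will need adjusting whenever the page's markup changes.

# Illustration only: the nested loops above expressed as one CSS selector.
# The selector mirrors the tag structure used earlier in this post.
import requests
from bs4 import BeautifulSoup

url = "https://timesofindia.indiatimes.com/home/headlines"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

selector = ("div#c_02 div#c_0201 div#c_headlines_wdt_1 "
            "div.top-newslist ul.clearfix li span.w_tle a")

for a_tag in soup.select(selector):
    href = a_tag.get("href", "")
    if href.startswith("/"):
        print(a_tag.get("title", ""), "-",
              "https://timesofindia.indiatimes.com" + href)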

Part 2: Write the scraped data to a file in Google Drive

Now that our program can successfully scrape the data, in this section, we will take a step forward and write the scraped data into a JSON file in Google Drive. We will continue to use Google Colab to run the program.

For this, we will mount Google Drive and create a folder to store our files. We collect the headlines in a list of dictionaries and then use Python's json library to write the list to a JSON file. A copy of this code is available in my TOITopHeadlines v2.0 repository on GitHub.


# This program scrapes a web page from Times of India to extract
# top headlines and writes it to a JSON file in Google Drive.

import requests
import datetime
import json
from bs4 import BeautifulSoup

# Prepare file location
import os
from google.colab import drive
strDriveMountLoc = '/content/drive'
strDriveTargetLoc = "/content/drive/My Drive/WebScrape/DataNewsScrapeTOI"
# Mount Google Drive
drive.mount(strDriveMountLoc)
# Create the target folder in Google Drive if it does not already exist
!mkdir -p "/content/drive/My Drive/WebScrape/DataNewsScrapeTOI"

def toi_topheadlines():
  # Generate output filename based on the date and time
  dt = datetime.datetime.now()
  filename = "toi_topheadlines" + dt.strftime("%Y%m%d%H%M%S") + ".json"

  url = "https://timesofindia.indiatimes.com/home/headlines"
  page_request = requests.get(url)
  page_content = page_request.content
  soup = BeautifulSoup(page_content,"html.parser")

  count = 1
  txtscraped = ""
  headlines = []

  for divtag_c02 in soup.find_all('div', {'id': 'c_02'}):
    for divtag_0201 in divtag_c02.find_all('div', {'id': 'c_0201'}):
      divtag_hwdt1 = divtag_0201.find('div', {'id': 'c_headlines_wdt_1'})
      for divtag_topnl in divtag_hwdt1.find_all('div',
       {'class': 'top-newslist'}):
        for ultag in divtag_topnl.find_all('ul',{'class': 'clearfix'}):
          for litag in ultag.find_all('li'):
            for spantitle in litag.find_all('span', {'class': 'w_tle'}):
              href = spantitle.find('a')['href']
              if href.find("/", 0) == 0:
                href = "https://timesofindia.indiatimes.com" + href
                print(str(count) + ". " + spantitle.find('a')['title'] +
                      " - " + href)
                thisheadline = {
                    "sn": count,
                    "title": spantitle.find('a')['title'],
                    "href": href
                }
                headlines.append(thisheadline)

                count = count + 1

  with open(strDriveTargetLoc + '/' + filename, "a") as f:
    f.write(json.dumps(headlines, indent=2))

if __name__ == "__main__":
  toi_topheadlines()

print("\n" + "end")

Executing the code in Google Colab will display a prompt to connect to Google Drive and then take you through a series of pages to authenticate using your Google ID. Once you are past the authentication, the code should run and create a JSON file in the folder path that you chose in the program. Below you can see what a list of such files would look like.

JSON files in Google Drive

The content of the JSON file would look similar to what you see below.

The JSON output

In Conclusion

A word of caution about web scraping: while many websites don't mind, many others don't like it. It is best to go through their terms of service to understand the limitations on what you can do with the data and ensure that you are not in violation. Another important point to remember is that many websites periodically change their look and feel, thereby modifying the structure of the HTML. In the face of such changes, your web scraping logic may fall flat, so web scrapers need continuous maintenance. A better way to capture and harvest such data is to use APIs published by the websites. This demonstration is for academic purposes only.

Happy scraping!

Tuesday, September 27, 2022

Is it time to retire C and C++ for Rust in new programs?

Mark Russinovich, Microsoft Azure's CTO, tweeted that "it's time to halt starting any new projects in C/C++ and use Rust for those scenarios where a non-GC language is required. For the sake of security and reliability. The industry should declare those languages as deprecated."

...

Both languages are "memory-unsafe." They give developers fine-grained control of their application's memory, but with great power comes great potential for trouble. One memory snowball slip-up can lead to an avalanche of errors.

...

Rust, on the other hand, is a memory-safe language.

Friday, August 14, 2020

Querying Aspen InfoPlus.21 (IP.21) data from SQLplus Web Service using C#.Net

Image showing part of the C#.Net source code
We will write a console-based SOAP client application in C#.Net to query data from Aspen InfoPlus.21 (IP.21) through the SQLplus Web Service hosted on your IP.21 server. Though there are other ways to achieve this, using the SQLplus Web Service allows platform independence and doesn't require any Aspen components to be installed on the client system.

Prerequisites: Ensure that your implementation of Aspen InfoPlus.21 / SQLplus Server is running the SQLplus Web Service. If it is not running, you may need to install the feature using the AspenTech installation setup disks. This code should run with any .Net Framework above version 4 and IP.21 above version 7.1. To verify that the SQLplus Web Service is available, run a simple check by going to the following URL in your web browser.

http://[YOUR_SERVER]/SQLPlusWebService/SQLplusWebService.asmx

Below is the C# source code that demonstrates connecting to the SQLplus Web Service, querying the database, formatting the received XML response and writing the formatted XML to a disk file.

/*

Program: Query data from AspenTech InfoPlus.21 using the SQLplus Web
         Service.

Language: C#

*/

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Reflection;
using System.Text;
using System.Threading.Tasks;
using System.Web;
using System.Xml;
using System.Xml.Linq;

namespace QueryAspenIP21
{
    class Program
    {
        static void Main(string[] args)
        {
            // Replace [HOST_NAME] below with the details of your
            // IP.21 server
            const string webSvc = "http://[HOST_NAME]"
                + "/SQLPlusWebService/SQLplusWebService.asmx";
           
            const string soap12Req =
                "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
                + "<soap12:Envelope xmlns:xsi="
                + "\"http://www.w3.org/2001/XMLSchema-instance\" "
                + "xmlns:xsd="
                + "\"http://www.w3.org/2001/XMLSchema\" xmlns:soap12="
                + "\"http://www.w3.org/2003/05/soap-envelope\">"
                + "<soap12:Body>"
                + "<ExecuteSQL xmlns="
                + "\"http://www.aspentech.com/SQLplus.WebService\">"
                + "<command>{0}</command>"
                + "</ExecuteSQL>"
                + "</soap12:Body>"
                + "</soap12:Envelope>";
           
            // Build the SQL query string
            const string sqlCmd = "SELECT NAME, IP_DESCRIPTION, "
                + "IP_INPUT_VALUE, IP_INPUT_TIME FROM IP_AnalogDef "
                + "WHERE NAME = 'ATCAI'";
           
            HttpWebRequest webReq = (HttpWebRequest)WebRequest.Create(
                webSvc);
           
            // Set credentials if needed
            webReq.Credentials = CredentialCache.DefaultCredentials;
            webReq.ContentType = "application/soap+xml; charset=utf-8";
            webReq.Method = "POST";

            XmlDocument soapEnvDoc;
            soapEnvDoc = new XmlDocument();
            soapEnvDoc.LoadXml(string.Format(soap12Req, sqlCmd));

            byte[] bytes;
            bytes = Encoding.UTF8.GetBytes(soapEnvDoc.OuterXml);
            webReq.ContentLength = bytes.Length;
            using (Stream stream = webReq.GetRequestStream())
            {
                stream.Write(bytes, 0, bytes.Length);
            }

            HttpWebResponse webRes = (HttpWebResponse)webReq.
                GetResponse();
            Stream dataStream = webRes.GetResponseStream();
            StreamReader reader = new StreamReader(dataStream);

            XmlDocument soapResXml = new XmlDocument();
            soapResXml.Load(reader);

            // Decode encoded values in the XML string
            soapResXml.InnerXml = HttpUtility.HtmlDecode(
                soapResXml.InnerXml);
           
            // Use LINQ to format the XML for printing
            XDocument beautifulXml = XDocument.Parse(soapResXml.InnerXml);
            soapResXml.InnerXml = beautifulXml.ToString();

            // Build the file path to write
            string filePath = Path.GetDirectoryName(
                Assembly.GetExecutingAssembly().Location) +
                "\\response.xml";

            using(StreamWriter stream = new StreamWriter(filePath, false,
                Encoding.GetEncoding("iso-8859-7")))
            {
                soapResXml.Save(stream);
            }

            // Clean up
            reader.Close();
            dataStream.Close();
            webRes.Close();
        }
    }
}

Once the code has executed successfully, the XML response should be written to an XML file at the same location as the executable. This is a sample of the response XML with the data.

<?xml version="1.0" encoding="ISO-8859-7"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <soap:Body>
    <ExecuteSQLResponse
    xmlns="http://www.aspentech.com/SQLplus.WebService/">
      <ExecuteSQLResult>
        <NewDataSet>
          <Table>
            <NAME>ATCAI</NAME>
            <IP_DESCRIPTION>Sine Input</IP_DESCRIPTION>
            <IP_INPUT_VALUE>9.3396207809448242</IP_INPUT_VALUE>
            <IP_INPUT_TIME>06-AUG-2020 01:35:30.3</IP_INPUT_TIME>
          </Table>
        </NewDataSet>
      </ExecuteSQLResult>
    </ExecuteSQLResponse>
  </soap:Body>
</soap:Envelope>
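
If you would like to test the web service without writing any .NET code, the same SOAP 1.2 request can be sketched in Python using the requests library. This is an illustration only; [HOST_NAME] is the same placeholder as in the C# code, and the envelope, namespace and query are copied from it.

# Illustration only: the same SOAP 1.2 request to the SQLplus Web Service,
# made with Python's requests library. Replace [HOST_NAME] with your server.
import requests

url = "http://[HOST_NAME]/SQLPlusWebService/SQLplusWebService.asmx"

sql_cmd = ("SELECT NAME, IP_DESCRIPTION, IP_INPUT_VALUE, IP_INPUT_TIME "
           "FROM IP_AnalogDef WHERE NAME = 'ATCAI'")

envelope = f"""<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
  <soap12:Body>
    <ExecuteSQL xmlns="http://www.aspentech.com/SQLplus.WebService">
      <command>{sql_cmd}</command>
    </ExecuteSQL>
  </soap12:Body>
</soap12:Envelope>"""

response = requests.post(
    url,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
print(response.status_code)
print(response.text)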

I hope this helps someone. I appreciate any comments or suggestions. The code is also available here on GitHub.