Friday, September 8, 2023

SuperWriter article on Wikipedia

A few days ago I created a draft article here on Wikipedia for SuperWriter, a word processor developed by Sorcim Corporation (later acquired by Computer Associates in 1984). Along with it, I also uploaded a screenshot of the application at this location.

The submission was rejected for publication because the provided references do not show that the subject qualifies for a Wikipedia article, so I will keep the source of the article here for now:


{{Short description|SuperWriter word processor for Apricot Portable}}

{{Draft topics|literature|media|computing}}

{{AfC topic|other}}


{{Infobox Software

|name                   = SuperWriter

|logo                   = 

|screenshot             = SuperWriter1.03Screenshot.png

|caption                = SuperWriter 1.03 running on MS-DOS

|developer              = [[Sorcim]]

|author             =

|released     = 

|latest_release_version = 

|latest_release_date    = 

|latest_preview_version = 

|latest_preview_date    = 

|operating_system       = [[MS-DOS]] 2.11, [[Concurrent CP/M]], [[CP/M-86]]

|genre                  = [[Word processor]]

|license                = [[Proprietary software|Proprietary]]

|website                = 

|discontinued           = yes

}}

'''SuperWriter''' was a [[word processor]] program that came bundled with computers from [[Apricot Computers]], such as the [[Apricot Portable]].<ref>{{cite web |title=Apricot Portable |url=https://www.old-computers.com/museum/computer.asp?c=492 |archiveurl=https://web.archive.org/web/20220803190552/https://www.old-computers.com/museum/computer.asp?c=492 |archivedate=2022-08-03 |url-status=live |accessdate=2023-09-05 |website=Old-Computers.com }}</ref> The application was originally written by [[Sorcim]], which was acquired by [[Computer Associates]] in 1984.<ref>{{cite magazine |last=Needle |first=David |title=Computer Associates buys Sorcim |url=https://books.google.com/books?id=wS4EAAAAMBAJ&pg=PA11 |page=11 |magazine=[[InfoWorld]] |publisher=[[Popular Computing Inc.]] |date=1984-06-25 |access-date=2023-09-05 |volume=6 |issue=26 }}</ref> It featured a spelling checker and various types of text justification.<ref>{{cite magazine |author=<!--Staff writer(s); no by-line.--> |title=InfoWorld Reviews |url=https://books.google.com/books?id=wS4EAAAAMBAJ&pg=PA69 |page=69 |magazine=[[InfoWorld]] |publisher=[[Popular Computing Inc.]] |date=1984-06-25 |access-date=2023-09-05 |volume=6 |issue=26 }}</ref> SuperWriter was a potential competitor to [[Information Unlimited Software|IUS]]'s [[EasyWriter|EasyWriter II]] program for the [[IBM Personal Computer|IBM PC]]; IUS was itself acquired by Computer Associates a few months before it acquired Sorcim.


==Features==

SuperWriter's user interface combined menu-driven and command-driven interaction.<ref>{{cite magazine |author=<!--Staff writer(s); no by-line.--> |title=Word Processing: The Latest Word |url=https://books.google.com/books?id=wtckANahGXIC&pg=PA120 |page=120 |magazine=[[PC Magazine]] |publisher=[[Ziff Davis|Ziff Davis Publishing Company]] |date=1985-08-20 |access-date=2023-09-06 |volume=4 |issue=17 }}</ref>


Apart from featuring the [[SpellGuard]] spelling checker, SuperWriter let the user preview a document on screen before sending it to print. Among other formatting options, the application let the user define the length and width of a page, the indentation length, header and trailer text, and the type of text justification. It could print the whole document or only a part of it. SuperWriter also let the user toggle letters between lower and upper case, create tables, and find and replace words within a document.


The mail-merge feature in SuperWriter version 1.03 was one of the most powerful of its time but lacked adequate documentation in the user manual, which made getting started with the feature frustrating.<ref>{{cite magazine |author=<!--Staff writer(s); no by-line.--> |title=Power Performance: Nine Programmable Mail-mergers |url=https://books.google.com/books?id=lmtbry-ytXgC&pg=PA181 |page=181 |magazine=[[PC Magazine]] |publisher=[[Ziff Davis|Ziff Davis Publishing Company]] |date=1986-04-29 |access-date=2023-09-05 |volume=5 |issue=8 }}</ref>


==Reception==

SuperWriter never sold in substantial quantities and was limited by being able to edit only what it could hold in memory. ''[[InfoWorld]]'' magazine rated it excellent in error handling.<ref>{{cite magazine |last=Satchell |first=Stephen |title=Review: SuperWriter |url=https://books.google.com/books?id=gy4EAAAAMBAJ&pg=PA51 |page=51 |magazine=[[InfoWorld]] |publisher=[[Popular Computing Inc.]] |date=1984-02-27 |access-date=2023-09-05 |volume=6 |issue=9 }}</ref>


==Pricing==

In 1984, the list price was $295, and the program was available for the [[IBM Personal Computer|IBM PC]] and [[Compaq]] computers running MS-DOS.


== References ==

<!-- Inline citations added to your article will automatically display here. See en.wikipedia.org/wiki/WP:REFB for instructions on how to add citations. -->

{{reflist}}




Friday, September 1, 2023

Microsoft WordPad to be deprecated from Windows

In a new feature note, Microsoft announced that WordPad is deprecated and will be removed from future releases of Windows.

WordPad 10 running on Windows 11
WordPad, initially released in 1995, is a basic word processor that has been included with almost all versions of Microsoft Windows from Windows 95 onward. It replaced Microsoft Write, the word processor bundled with Windows versions prior to Windows 95.

The application was built in C++ and used MFC (Microsoft Foundation Classes). An early archived version of the source code is available here on GitHub.

Microsoft's note, which is available here, stated, "WordPad is no longer being updated and will be removed in a future release of Windows. We recommend Microsoft Word for rich text documents like .doc and .rtf and Windows Notepad for plain text documents like .txt."

Monday, August 28, 2023

Today I learned about Marion Stokes' TV archive

Today I learned about Marion Stokes, a woman from Philadelphia, Pennsylvania, who recorded hundreds of thousands of hours of television news footage spanning 35 years, from 1977 until her death in 2012. By then, she had amassed a collection of about 71,000 VHS and Betamax tapes. Her archivist instincts and compulsive hoarding also resulted in a collection of 30,000 to 40,000 books.

In 2019, a documentary film about her and her archive, Recorder: The Marion Stokes Project, was released.

Her collection was acquired by the Internet Archive and is available here.

Wednesday, August 23, 2023

Chandrayaan-3 landed on the moon

Today, August 23, 2023, ISRO's Chandrayaan-3 lunar exploration mission successfully landed on the moon at 6:04 PM IST (5:34 AM PT). The mission, carrying a lander named Vikram and a rover named Pragyan, was launched on an LVM3-M4 ("Bahubali") rocket on July 14 this year, which I had logged here.

With this landing, India has become the fourth nation to make a soft landing on the moon, after the former Soviet Union, the U.S. and China. It has also become the first country to land near the lunar south pole, a region that is still largely unexplored.

ISRO's Tweet on the successful landing:

Below is the saved live stream of the landing on YouTube. The final phase of the descent starts at 35:13.

ISRO tweeted the first images received from the lander:

Earlier, on November 14, 2008, during the Chandrayaan-1 mission, ISRO deliberately crash-landed its Moon Impact Probe (MIP) on the lunar surface, which helped discover water molecules on the moon.

Saturday, August 19, 2023

Today I Learned: All coal was created around the same time

There is an ongoing debate about whether almost all coal formed around the same time, during a relatively brief period 360 to 300 million years ago (roughly the Carboniferous period).

Photo by Nikolay Kovalenko on Unsplash
We are still using the coal that formed during this period; almost no new coal has formed since then.

Steve Mould provides an easy-to-understand explanation in this video on YouTube.

The Elk Valley Coal News has a detailed article on this here.



ChatGPT Prompts: A barebones list

With new advances in artificial intelligence and its subsets, machine learning and deep learning, there are now many applications based on large language models that generate human-like text. Common to all these applications is the need for prompts that guide them toward appropriate responses. This post contains a barebones list of prompts for ChatGPT that can be used to boost productivity.

Counter reply: Write a witty counter reply to the following statement: [insert statement here]

Meeting agenda: Draft an effective meeting agenda given [insert your points here].

Email communication: Draft a polite yet effective email response for [insert your situation].

Elevator pitch: Create an elevator pitch for [insert product or idea].

Social media captions: Create an engaging caption for this: [describe social media content].

Simplify explanations: Explain the following in simple terms: [insert text here].

Really simple explanations: ELI5: [insert text here].

Project timelines: Based on the following milestones, create a project timeline: [insert milestones].

Marketing slogans: Create a catchy marketing slogan for [insert product description].

Professional online summary: Write a professional summary for my LinkedIn profile, given these skills and experiences [insert information].

Business language: Translate this text into professional business language: [insert text]

Business strategy: Generate a business strategy for a startup in the [insert industry] sector.

Product description: Write a compelling product description for [insert product].

Efficient to-do lists: Prioritize and organize these tasks into a to-do list [insert tasks].

Optimize content for SEO: Improve the SEO of this blog post: [insert blog post].

Creative advertisements: Generate a creative advertisement concept for promoting [insert product description].

Persuasive sales copy: Generate a persuasive sales copy for [insert product description].

Optimized workflow: Create an optimized workflow based on these tasks [insert tasks].

Innovation strategies: Suggest strategies for driving innovation in the [insert industry] sector.

Team motivation: Draft a motivational message to my team about [insert topic].

Team building activities: Suggest some creative team building activities for a remote team.

Impactful presentations: Design a powerful slide deck for a presentation on [insert topic].

Fundraising proposals: Create a compelling fundraising proposal for [insert cause].

Project proposals: Draft a comprehensive project proposal for [insert project description].

Recruitment ads: Compose a job ad for the position of [insert job title].

General interview questions: Provide a professional and thoughtful response to this interview question: [insert question].

Crisis management plans: Create a crisis management plan for [insert crisis].

Miscellaneous ChatGPT Prompts


What are 5 creative things I could do with my kids' art? I don't want to throw them away, but it's also so much clutter.


Friday, August 18, 2023

Using Python and BeautifulSoup to web scrape Times of India news headlines

According to Wikipedia, "web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites." In this post, we will create a small program in Python to scrape top headlines from Times of India's news headlines page using the BeautifulSoup library.

Sample webpage showing the top news headlines

Specifically, our program will fetch the Times of India Headlines page and extract the prime news headlines at the top of the page. As of this writing, the page displays 6 headlines in that section, which is what we want to scrape. In this screenshot of the webpage, our point of interest is the highlighted section containing the top 6 headlines.

The programming language we will use is Python 3, along with the BeautifulSoup 4 package for parsing the HTML. I will assume that you already have a system with these prerequisites installed, as well as an editor in which to write and run Python programs. For the purpose of this illustration, I will use Google Colab to write and execute the Python code.
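
If these packages are not already installed, both are available from PyPI (in Google Colab, prefix the command with ! to run it in a notebook cell):

pip install requests beautifulsoup4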

Part 1: Scrape the website

To start with, we will write simple code that fetches the data and outputs the scraped text in the editor's output window. Type the following code into your Python editor; I will explain the code later in this post. A copy of this code is available in my TOITopHeadlines v1.0 repository on GitHub.


# This program scrapes a web page from Times of India to extract
# top headlines and prints it in the output window.

import requests
from bs4 import BeautifulSoup

def toi_topheadlines():
  url = "https://timesofindia.indiatimes.com/home/headlines"
  page_request = requests.get(url)
  page_content = page_request.content
  soup = BeautifulSoup(page_content,"html.parser")

  count = 1

  # Walk the nested containers that hold the top-headlines section;
  # the nesting mirrors the HTML structure of the page
  for divtag_c02 in soup.find_all('div', {'id': 'c_02'}):
    for divtag_0201 in divtag_c02.find_all('div', {'id': 'c_0201'}):
      divtag_hwdt1 = divtag_0201.find('div', {'id': 'c_headlines_wdt_1'})
      for divtag_topnl in divtag_hwdt1.find_all('div',
       {'class': 'top-newslist'}):
        for ultag in divtag_topnl.find_all('ul', {'class': 'clearfix'}):
          for litag in ultag.find_all('li'):
            for spantitle in litag.find_all('span', {'class': 'w_tle'}):
              href = spantitle.find('a')['href']
              # Relative links start with "/"; prefix the site's base URL
              if href.find("/", 0) == 0:
                href = "https://timesofindia.indiatimes.com" + href
                print(str(count) + ". " + spantitle.find('a')['title'] +
                      " - " + href)
                count = count + 1

if __name__ == "__main__":
  toi_topheadlines()

print("\n" + "end")

Executing the code will extract the HTML from the URL, parse out the required data, and output the list of news headline titles and their respective URLs, as highlighted in the screenshot below:

News headlines scraped using Python

If you have managed to get that working, congratulations! You have scraped the top headlines, and you can now use them in your own creative ways. Next, we will delve into what we did and what got us here.

Now, take a look at the portion of the source code that goes through a chain of for loops to descend into the HTML tags. This corresponds exactly to the way the markup is structured in the web page. You can examine the HTML markup by opening the browser's Developer Tools and inspecting the code behind the UI elements.

Inspecting HTML tag structure

Your program has to be tuned to the HTML markup structure of the page you are trying to scrape.
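
As an aside, BeautifulSoup can express the same traversal more compactly with CSS selectors through its select() method. Below is a minimal sketch assuming the soup object and the same markup structure as in the program above; the selector string is my own shorthand, not part of the original code:

# Equivalent traversal using a CSS selector instead of nested loops;
# assumes the markup structure seen in the Developer Tools screenshot
for anchor in soup.select(
    '#c_02 #c_0201 #c_headlines_wdt_1 div.top-newslist '
    'ul.clearfix li span.w_tle a'):
  href = anchor.get('href', '')
  if href.startswith('/'):  # relative link; prefix the base URL
    print(anchor.get('title'), '-',
          'https://timesofindia.indiatimes.com' + href)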

Part 2: Write the scraped data to a file in Google Drive

Now that our program can successfully scrape the data, we will take a step further in this section and write the scraped data to a JSON file in Google Drive. We will continue to use Google Colab to run the program.

For this, we will mount Google Drive and create a folder to store our files. We collect the headlines in a list of dictionaries and then use Python's json library to write the list to a JSON file. A copy of this code is available in my TOITopHeadlines v2.0 repository on GitHub.


# This program scrapes a web page from Times of India to extract
# top headlines and writes it to a JSON file in Google Drive.

import requests
import datetime
import json
from bs4 import BeautifulSoup

# Prepare file location
import os
from google.colab import drive
strDriveMountLoc = '/content/drive'
strDriveTargetLoc = "/content/drive/My Drive/WebScrape/DataNewsScrapeTOI"
# Mount Google Drive
drive.mount(strDriveMountLoc)
# Create the target folder if it does not already exist
os.makedirs(strDriveTargetLoc, exist_ok=True)

def toi_topheadlines():
  # Generate output filename based on the date and time
  dt = datetime.datetime.now()
  filename = "toi_topheadlines" + dt.strftime("%Y%m%d%H%M%S") + ".json"

  url = "https://timesofindia.indiatimes.com/home/headlines"
  page_request = requests.get(url)
  page_content = page_request.content
  soup = BeautifulSoup(page_content,"html.parser")

  count = 1
  headlines = []

  # Walk the nested containers, exactly as in Part 1
  for divtag_c02 in soup.find_all('div', {'id': 'c_02'}):
    for divtag_0201 in divtag_c02.find_all('div', {'id': 'c_0201'}):
      divtag_hwdt1 = divtag_0201.find('div', {'id': 'c_headlines_wdt_1'})
      for divtag_topnl in divtag_hwdt1.find_all('div',
       {'class': 'top-newslist'}):
        for ultag in divtag_topnl.find_all('ul', {'class': 'clearfix'}):
          for litag in ultag.find_all('li'):
            for spantitle in litag.find_all('span', {'class': 'w_tle'}):
              href = spantitle.find('a')['href']
              # Relative links start with "/"; prefix the site's base URL
              if href.find("/", 0) == 0:
                href = "https://timesofindia.indiatimes.com" + href
                print(str(count) + ". " + spantitle.find('a')['title'] +
                      " - " + href)
                # Collect each headline as a dictionary for the JSON output
                thisheadline = {
                    "sn": count,
                    "title": spantitle.find('a')['title'],
                    "href": href
                }
                headlines.append(thisheadline)

                count = count + 1

  # Each run writes a fresh timestamped file, so open it in write mode
  with open(strDriveTargetLoc + '/' + filename, "w") as f:
    f.write(json.dumps(headlines, indent=2))

if __name__ == "__main__":
  toi_topheadlines()

print("\n" + "end")

Executing the code in Google Colab will display a prompt to connect to Google Drive and then take you through a series of pages to authenticate with your Google ID. Once you are past the authentication, the code should execute and create a JSON file in the folder path you chose in the program. Below you can see how a list of such files would look.

JSON files in Google Drive

The content of the JSON file would look similar to what you see below.

The JSON output
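
For reference, the file follows the structure below; the headline titles and URLs here are illustrative placeholders, not actual scraped output:

[
  {
    "sn": 1,
    "title": "Example headline one",
    "href": "https://timesofindia.indiatimes.com/example-one"
  },
  {
    "sn": 2,
    "title": "Example headline two",
    "href": "https://timesofindia.indiatimes.com/example-two"
  }
]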

In Conclusion

A word of caution on web scraping: while many websites don't mind it, many others do. It is best to go through a site's terms of service to understand the limitations on what you can do with the data and to ensure that you are not in violation. Another important point to remember is that many websites periodically change their look and feel, modifying the structure of their HTML; in the face of such changes, your scraping logic may fall flat, so web scrapers need continuous maintenance. A better way to capture and harvest such data is to use APIs published by the websites, where available. This demonstration is for academic purposes only.
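
As a minimal courtesy check before scraping, you can also consult the site's robots.txt programmatically. Here is a small sketch using Python's standard urllib.robotparser module; note that it checks crawler rules only and is not a substitute for reading the terms of service:

# Check whether a URL may be fetched according to the site's robots.txt
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://timesofindia.indiatimes.com/robots.txt")
rp.read()

url = "https://timesofindia.indiatimes.com/home/headlines"
print(rp.can_fetch("*", url))  # True if fetching is allowed for any agent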

Happy scraping!