Edward Hopper and Norman Rockwell

Edward Hopper and Norman Rockwell created through their paintings a very special kind of reality associated with the american scene and are excellent examples to discuss reality and visual arts.

I did a post/blog about about Edward Hopper where I discussed extensively his art and you can find pratically everything he painted. When his wife Jo Hopper died, she bequeathed all of Edward’s work, as well as all her own, to the Whitney Museum – over 3,000 pieces in total what makes him a very special case because his wife donated almost all of his pintings to a single institution.

I didn’t do a post/blog on Norman Rockwell, but I discussed extensively in some posts the relation of the image he depicted about the american scene, specially during the WWII and the Golden Age or the Post War era.

From both of these sources I will separate contexts that allow us to better understand how the point of view converts reality and the image that is created of it.

Edward Hopper

The reality that he was inserted in and probably the one we would perceive can be understood through these two videos:

In the first video, I call special attention to the real place where he took his image to create perhaps his most famous painting: Night Hawks. To that please add How Edward Hopper Storyboarded ‘Nighthawks’. Since storyboard is a cinema thing, let’s hear from a filmmaker (Wim Wenders) how he perceives Edward Hopper in: TWO OR THREE THINGS I KNOW ABOUT EDWARD HOPPER. In this post, I call special attention to the surprise that Wim Wenders had upon discovering Edward Hopper and his explanation of why Hopper is a model of how one can see the world.

In the second video, Mrs. Jennifer Tipton, among other very interesting considerations, says something that perhaps define Edward Hopper: “He uses light in an inexpressible way – in a way that makes you feel something that is very difficult to articulate.”

From Hopper’s point of view and to make it easier for the reader you can examine the following subjects from which Hopper is famous which are a testimony to my quest about the image and reality:  

Edward Hopper and the American Hotel

Edward Hopper Women It should better be written woman… From 1924 to his death in 1967, Hopper painted women who were shadow-faced, round-contoured ciphers. In the world of his imagination, they stayed up all night, poring over cups of coffee, lost in thought at the movies, undressing next to a radiator or lingering in the office with their boss, permanently stuck in some noir urban peepshow. Hopper’s women never age, but his wife, their only model, was not so immune. After 25 years of marriage, she bemoaned ‘time passing, passing, drop by drop of one’s life blood – hair greying, fashions changing, an entirely new slant on art rampant and 25 years of my life gone’.

Edward Hopper and American Solitude

Edward Hopper and Modern Life

Norman Rockwell

I know that academia shuns and despises realism in favor of modern abstract art and for some very good reasons which me included accept but, to be educated and put it blandly, while realism remains an important and respected style within the broader art world, the preference for modern and abstract art in academic and critical circles is influenced by a combination of historical shifts, philosophical trends, institutional support, and market dynamics. This does not mean that realism is disregarded entirely, but rather that abstract and modern art have been more prominently aligned with the values and interests of the contemporary art establishment.

It is obvious that Rockwell was almost completely unaffected by the revolutionary events in painting that occurred during his lifetime. If we are to compare it with Edward Hopper (1882 – 1967), we will feel that where Hopper, who also represented American life in a realistic way, albeit with some degree of abstraction, expressed coldness, alienation, separation, and uncertainty, Rockwell showed joy, sociability, and warmth. Most 20th century artists discovered the need to distance themselves from society, especially in the abstraction of imagery, creating worlds that exist only inside people’s heads. Especially people disturbed by the direction civilization has taken. Rockwell didn’t, he placed himself at the center of average American values, allowing himself only a few humorous digs here and there and into the simplicity of the naivety of the young or the conservatism of the older generation. He was, therefore, much more of an insider to his large audience as he was excluded from the avant-garde of American artists (or any other nationality).

While Rockwell’s early works often focused on idyllic and idealized scenes of American life, his later works shifted towards more serious and socially conscious themes. This change was partly influenced by his departure from The Saturday Evening Post and his subsequent association with Look magazine, which allowed him more freedom to explore social issues.

Norman Rockwell’s concern for social justice bedame evident in his powerful and evocative paintings addressing civil rights and racial equality. His later works reflect a deep commitment to highlighting the struggles and injustices faced by marginalized communities, marking a significant shift from his earlier, more nostalgic depictions of American life.

We can understand all this well when we see that The Scream of the Norwegian Edvard Munch was recently sold for 120 million dollars while Norman Rockwell highest price for one of his paintings reached fairly 50 million dollars.

Norman Rockwell’s paintings addressing civil rights and racial equality

“The Problem We All Live With” (1964)

  • Description: This painting depicts Ruby Bridges, a six-year-old African American girl, being escorted by U.S. Marshals to an all-white school in New Orleans, highlighting the issue of school desegregation.
  • Impact: The painting is considered one of Rockwell’s most poignant works on civil rights and remains an iconic image of the struggle for racial equality in America.

“Southern Justice” (Murder in Mississippi) (1965)

  • Description: This painting portrays the brutal murder of three civil rights workers—James Chaney, Andrew Goodman, and Michael Schwerner—by members of the Ku Klux Klan in 1964.
  • Impact: It serves as a stark reminder of the violent resistance to civil rights efforts and Rockwell’s commitment to addressing these critical issues.

“New Kids in the Neighborhood” (1967)

  • Description: This painting shows two African American children moving into a predominantly white neighborhood, capturing a moment of integration and the social tensions surrounding it.
  • Moving Day depicts the integration of Chicago’s Park Forest suburban community. The children examine each other with curiosity and it appears likely that they will soon be friends. However, the face appearing from behind a window curtain make us wonder how the adults will react.
  • Impact: The work reflects Rockwell’s sensitivity to the everyday realities of desegregation and racial integration.

Norman Rockwell goes beyond that.

The Four Freedoms.

President Franklin D. Roosevelt’s “Four Freedoms” speech, delivered on January 6, 1941, was a call to action for the United States and other democracies to work together to defend four essential freedoms around the world. These four freedoms were freedom of speech, freedom of worship, freedom of will and freedom of fear.

President Roosevelt argued that these freedoms were necessary for people to live in peace and security, and that they were under threat from aggression from fascist powers in Europe and Asia. He called on the United States to take a leadership role in defending these freedoms and supporting democracy around the world.

No one better than Norman Rockwell to express the image of what these “freedoms” would be.

Which were the materialization in the American imagination of what President Franklin D. Roosevelt proposed on January 6, 1941, which went down in history as the 1941 State of the Union address and which should “contemplate all the people of the world”.

And what was happening to “all the people in the world”?

Before checking this, let’s look at the US domestic situation before involvement in World War II.

The State of the Union address before Congress was largely about the national security of the United States and the threat to other democracies from World War I that was raging on every continent in the Eastern Hemisphere. In the speech, he broke with the United States tradition of non-interventionism that had long existed in the United States. He highlighted the US role in helping allies already engaged in war
In this context, he summarized the values ​​of democracy that were behind the bipartisan consensus on international engagement that existed at the time. The famous quote from the preface of this speech tells what these values ​​are: “In the same way that men do not live by bread alone, they do not fight by weapon alone.” In the second half of the speech, he lists the benefits of democracy, which include opportunities economic, employment, social security, and the promise of “adequate health care.” The first two freedoms of speech and religion are protected by the First Amendment of the United States Constitution. Roosevelt’s inclusion of the latter two freedoms went beyond the traditional constitutional values ​​protected by the U.S. Bill of Rights. Roosevelt endorsed a broader human right to economic security and anticipated what would come to be known decades later as the “human security” paradigm in social science and economic development. It also included “freedom from fear” against national aggression before the idea of ​​a United Nations for this protection was conceived or discussed by world leaders and allied nations.

Historical Context of the Four Freedoms Speech

With the end of the First World War (1914-1918), the United States adopted a policy of isolationism and non-interventionism, having refused to approve the Treaty of Versailles (1919) or formally enter the League of Nations. Many Americans remembering the horrors of World War I and, believing that their involvement in World War I had been a mistake, were adamantly against continued intervention in European affairs. With the Neutrality Act established after 1935, US law prohibited the sale of weapons to countries that were at war and placed restrictions on travel with belligerent vessels.
When World War II began in 1939 with Germany’s invasion of Poland, the United States was still committed to its noninterventionist ideals. Although Roosevelt, and a large segment of the population, supported the Allied cause, neutrality laws and a strong isolationist element in Congress ensured that no substantial support could be given. With the revision of the Neutrality Act in 1939, Roosevelt adopted a “methods-short-of-war policy”, in which supplies and armaments could be provided to European allies, as long as there was no declaration of war and no troops were sent. In December 1940, Europe was largely at the mercy of Adolf Hitler and Germany’s Nazi regime. With the defeat of France by Germany in June 1940, Great Britain was practically alone against the military alliance of Germany, Italy and Japan. Winston Churchill, as Prime Minister of Great Britain, asked Roosevelt and the United States to provide them with weapons in order to continue their war effort.

What was the actual reality and what changes which ocurred to that reality?

Before particularize and focus in the contribution of Norman Rockwell art and the kind of reality it foresaw, let’s recall and understand what happened: (If you are american our educated in history, you can jump directly to Where Norman Rockwell art played a role):

“Methods short of war” in U.S. foreign policy during World War II refers to a range of strategies and actions employed by the United States to support Allied nations and oppose Axis powers without directly entering the conflict until the attack on Pearl Harbor in December 1941. The policies involved were:

Key Elements of “Methods Short of War”:

1. Economic Measures:

  • Lend-Lease Act (1941): This critical policy allowed the U.S. to supply military aid to Allied nations, particularly the United Kingdom and the Soviet Union, without requiring immediate payment. The act enabled the transfer of arms, ammunition, and other supplies essential for the Allied war effort.
  • Neutrality Acts: Initially, these acts aimed to prevent U.S. involvement in foreign conflicts by restricting arms sales and loans to belligerent nations. However, they were gradually modified to allow for greater support to Allies, particularly through the “cash and carry” provision that permitted belligerents to purchase arms from the U.S. as long as they paid cash and transported the goods themselves.
  • Economic Sanctions and Embargoes: The U.S. imposed economic sanctions and embargoes on Axis powers, notably Japan, to restrict their access to vital resources such as oil and steel. These measures aimed to weaken the military capabilities of Axis nations without direct military confrontation.

Sources:

2. Diplomatic Efforts:

  • Atlantic Charter (1941): A pivotal policy statement issued by President Franklin D. Roosevelt and British Prime Minister Winston Churchill that outlined the Allies’ goals for the post-war world, emphasizing self-determination, economic cooperation, and peace. The charter strengthened the U.S.-UK alliance and set the stage for broader international cooperation.
  • Good Neighbor Policy: While primarily aimed at improving relations with Latin American countries, this policy also sought to secure hemispheric solidarity against Axis influence in the Americas.

Sources:

3. Limited Military Actions:

  • Destroyers for Bases Agreement (1940): An agreement between the U.S. and the UK in which the U.S. provided 50 destroyers to Britain in exchange for leases on British bases in the Caribbean and Newfoundland. This deal bolstered British naval capabilities while enhancing U.S. strategic positioning.
  • Patrolling the Atlantic: U.S. Navy ships began patrolling the Atlantic Ocean to monitor and report Axis submarine activity, providing critical intelligence to Allied forces.
  • Support for China: The U.S. provided military aid and advisors to China to help resist Japanese aggression, reflecting a broader strategy to limit Axis expansion in the Asia-Pacific region.

Sources:

Conclusion

“Methods short of war” encapsulates the various strategies the U.S. employed to support Allied nations and undermine Axis powers while avoiding direct involvement in World War II until the Pearl Harbor attack. These methods included economic support through the Lend-Lease Act, diplomatic initiatives like the Atlantic Charter, and limited military engagements such as the Destroyers for Bases Agreement. These efforts helped shape the course of the war and laid the groundwork for the U.S.’s eventual full-scale entry into the conflict.

America goes to war

I transcribe it here from the National Museum of WW2 because eventually I will translate it to my mother language Portuguese and because it might disappear.  This is what is under or behind the transformation of the United States of America the number one nation in the world. World War II and its aftermath can indeed be seen as the “final touch” or the “last drop” that solidified the United States’ rise to global dominance. While the foundations for this rise were already in place due to the country’s economic, industrial, and cultural strengths, the war and subsequent events accelerated and cemented its position as the leading global superpower. This period marked the transition from a strong, influential nation to the preeminent world leader in various domains. Let’s take a look in more detail how it initially evolved:

December 7, 1941: A Day That Will Live in Infamy

America’s isolation from war ended on December 7, 1941, when Japan staged a surprise attack on American military installations in the Pacific. The most devastating strike came at Pearl Harbor, the Hawaiian naval base where much of the US Pacific Fleet was moored. In a two-hour attack, Japanese warplanes sank or damaged 18 warships and destroyed 164 aircraft. Over 2,400 servicemen and civilians lost their lives.

America’s Reaction

“No matter how long it may take us to overcome this premeditated invasion, the American people in their righteous might will win through to absolute victory.”
— President Franklin D. Roosevelt, December 8, 1941


Though stunned by the events of December 7, Americans were also resolute. On December 8, President Roosevelt asked Congress to declare war against Japan. The declaration passed with just one dissenting vote. Three days later, Germany and Italy, allied with Japan, declared war on the United States. America was now drawn into a global war. It had allies in this fight–most importantly Great Britain and the Soviet Union. But the job the nation faced in December 1941 was formidable.


JOINING THE MILITARY

The United States faced a mammoth job in December 1941. Ill-equipped and wounded, the nation was at war with three formidable adversaries. It had to prepare to fight on two distant and very different fronts, Europe and the Pacific.

America needed to quickly raise, train, and outfit a vast military force. At the same time, it had to find a way to provide material aid to its hard-pressed allies in Great Britain and the Soviet Union.

Meeting these challenges would require massive government spending, conversion of existing industries to wartime production, construction of huge new factories, changes in consumption, and restrictions on many aspects of American life. Government, industry, and labor would need to cooperate. Contributions from all Americans, young and old, men and women, would be necessary to build up what President Roosevelt called the “Arsenal of Democracy.”

In the months after Pearl Harbor, the nation swiftly mobilized its human and material resources for war. The opportunities and sacrifices of wartime would change America in profound, and sometimes unexpected, ways.

Recruitment

The primary task facing America in 1941 was raising and training a credible military force. Concern over the threat of war had spurred President Roosevelt and Congress to approve the nation’s first peacetime military draft in September 1940. By December 1941 America’s military had grown to nearly 2.2 million soldiers, sailors, airmen, and marines.

America’s armed forces consisted largely of “citizen soldiers”,men and women drawn from civilian life. They came from every state in the nation and all economic and social strata. Many were volunteers, but the majority,roughly 10 million,entered the military through the draft. Most draftees were assigned to the army. The other services attracted enough volunteers at first, but eventually their ranks also included draftees.

Barracks Life

Upon their arrival at the training camps, inductees were stripped of the freedom and individuality they had enjoyed as civilians. They had to adapt to an entirely new way of living, one that involved routine inspections and strict military conduct, as well as rigorous physical and combat training. They were given identical haircuts, uniforms, and equipment, and were assigned to spartan barracks that afforded no privacy and little room for personal possessions.

The Draft

By late 1942 all men aged 18 to 64 were required to register for the draft, though in practice the system concentrated on men under 38. Eventually 36 million men registered. Individuals were selected from this manpower pool for examination by one of over 6,000 local draft boards. These boards, comprised of citizens from individual communities, determined if a man was fit to enter the military. They considered factors like the importance of a man’s occupation to the war effort, his health, and his family situation. Many men volunteered rather than wait to be drafted. That way, they could choose their branch of service.

Potential servicemen reported to military induction centers to undergo physical and psychiatric examinations. If a man passed these exams, he was fingerprinted and asked which type of service he preferred, though his assignment would be based on the military’s needs. After signing his induction papers, he was issued a serial number. The final step was the administration of the oath. He was now in the military. After a short furlough, he reported to a reception center before being shipped to a training camp. New recruits faced more medical examinations, inoculations, and aptitude tests.

Training

The training camp was the forge in which civilians began to become military men and women. In the training camps new servicemen and women underwent rigorous physical conditioning. They were drilled in the basic elements of military life and trained to work as part of a team. They learned to operate and maintain weapons. They took tests to determine their talents and were taught more specialized skills. Paratroopers, antiaircraft teams, desert troops, and other unique units received additional instruction at special training centers.


THE HOME FRONT

“I need not repeat the figures. The facts speak for themselves…. These men could not have been armed and equipped as they are had it not been for the miracle of production here at home. The production which has flowed from the country to all the battlefronts of the world has been due to the efforts of American business, American labor, and American farmers, working together as a patriotic team.”
–President Franklin D. Roosevelt, Navy Day speech, October 27, 1944

Raising an armed force was just part of America’s war effort. That force had to be supplied with the uniforms, guns, tanks, ships, warplanes, and other weapons and equipment needed to fight. With its vast human and material resources, the United States had the potential to supply both itself and its allies. But first the American economy had to be converted to war production.

The war production effort brought immense changes to American life. As millions of men and women entered the service and production boomed, unemployment virtually disappeared. The need for labor opened up new opportunities for women and African Americans and other minorities. Millions of Americans left home to take jobs in war plants that sprang up around the nation. Economic output skyrocketed.

The war effort on the “Home Front” required sacrifices and cooperation. “Don’t you know there’s a war on?” was a common expression. Rationing became part of everyday life. Americans learned to conserve vital resources. They lived with price controls, dealt with shortages of everything from nylons to housing, and volunteered for jobs ranging from air raid warden to Red Cross worker.


RATIONING AND RECYCLING

“Food for Victory”
To conserve and produce more food, a “Food for Victory” campaign was launched. Eating leftovers became a patriotic duty and civilians were urged to grow their own vegetables and fruits. Millions of “Victory gardens,” planted and maintained by ordinary citizens, appeared in backyards, vacant lots, and public parks. They produced over 1 billion tons of food. Americans canned food at home and consulted “Victory cookbooks” for recipes and tips to make the most of rationed goods.

“Make It Do or Do Without”
War production created shortages of critical supplies. To overcome these shortages, war planners searched for substitutes. One key metal in limited supply was copper. It was used in many war-related products, including assault wire. The military needed millions of miles of this wire to communicate on battlefields.

To satisfy the military’s demands, copper substitutes had to be found to use in products less important to the nation’s defense. The US Mint helped solve the copper shortage. During 1943 it made pennies out of steel. The Mint also conserved nickel, another important metal, by removing it from 5-cent coins. Substitutions like these helped win the production battle.

“Do With Less, So They’ll Have More”
The military needed more than guns and ammunition to do its job. It had to be fed. The Army’s standard K ration included chocolate bars, which were produced in huge numbers. Cocoa production was increased to make this possible.

Sugar was another ingredient in chocolate. It was also used in chewing gum, another part of the K ration. Sugar cane was needed to produce gunpowder, dynamite, and other chemical products.

To satisfy the military’s needs, sugar was rationed to civilians. The government also rationed other foods, including meat and coffee. Local rationing boards issued coupons to consumers that entitled them to a limited supply of rationed items.

“Save Waste Fat for Explosives”
Ammunition for rifles, artillery, mortars, and other weapons was one of the most important manufacturing priorities of World War II. A key ingredient needed to make the explosives in much ammunition was glycerine.

To help produce more ammunition, Americans were encouraged to save household waste fat, which was used to make glycerine. Other household goods,including rags, paper, silk, and string,were also recycled. This was a home front project that all Americans could join.


SALVAGE FOR VICTORY

Canteens are a standard part of military equipment. Millions were produced during the war. Most were made of steel or aluminum, metals which were also used to make everything from ammunition to ships. At times, both metals were in short supply.

To meet America’s metal needs, scrap was salvaged from basements, backyards, and attics. Old cars, bed frames, radiators, pots, and pipes were just some of the items gathered at metal “scrap drives” around the nation. Americans also collected rubber, tin, nylon, and paper at salvage drives.

“Share Your Cars and Spare Your Tires”
America’s military needed millions of tires for jeeps, trucks, and other vehicles. Tires required rubber. Rubber was also used to produce tanks and planes. But when Japan invaded Southeast Asia, the United States was cut off from one of its chief sources of this critical raw product.

America overcame its rubber shortage in several ways. Speed limits and gas rationing forced people to limit their driving. This reduced wear and tear on tires. A synthetic rubber industry was created. The public also carpooled and contributed rubber scrap for recycling.

Dollars for Defense
To help pay for the war, the government increased corporate and personal income taxes. The federal income tax entered the lives of many Americans. In 1939 fewer than 8 million people filed individual income tax returns. In 1945 nearly 50 million filed. The withholding system of payroll deductions was another wartime development. The government also borrowed money by selling “war bonds” to the public. With consumer goods in short supply, Americans put much of their money into bonds and savings accounts.


MOBILIZING THE ECONOMY

America’s economy performed astonishing feats during World War II. Manufacturers retooled their plants to produce war goods. But this alone was not enough. Soon huge new factories, built with government and private funds, appeared around the nation. Millions of new jobs were created and millions of Americans moved to new communities to fill them. Annual economic production, as measured by the Gross National Product (GNP), more than doubled, rising from $99.7 billion in 1940 to nearly $212 billion in 1945.

Production Miracles In industry after industry Americans performed production miracles. One story helps capture the scale of the defense effort. In 1940 President Roosevelt shocked Congress when he proposed building 50,000 aircraft a year. In 1944 the nation made almost double that number. Ford’s massive Willow Run bomber factory alone produced nearly one plane an hour by March 1944.

To achieve increases like this, defense spending jumped from $1.5 billion in 1940 to $81.5 billion in 1945. By 1944 America led the world in arms production, making more than enough to fill its military needs. At the same time, the United States was providing its allies in Great Britain and the Soviet Union with critically needed supplies.

Civilian Defense
Many Americans volunteered to defend the nation from enemy bombing or invasion. They trained in first aid, aircraft spotting, bomb removal, and fire fighting. Air raid wardens led practice drills, including blackouts. By mid-1942 over 10 million Americans were civil defense volunteers.

Though America’s mainland was never invaded, there were dangers offshore. Several Japanese submarines were spotted near the Pacific coast, and German U-boats patrolled the Atlantic coast, the Gulf of Mexico, and the Caribbean Sea. At least 10 US naval vessels were sunk or damaged by U-boats operating in American waters.

A Workforce Changed by War: Unemployment Disappears
The war virtually ended unemployment in America. The need for workers led manufacturers to hire women, teenagers, the aged, and minorities previously excluded by discrimination from sectors of the economy. Plentiful overtime work contributed to rising wages and increased savings.

Military and economic expansion created labor shortages. To fill the gap, government and industry encouraged women to enter the workforce. Though most working women continued to labor in more traditional employment like waitressing and teaching, millions took better-paid jobs in defense factories.

African Americans and other minorities also took high-paying industrial jobs previously reserved for whites. In 1941, black labor leader A. Philip Randolph threatened to organize a protest march on Washington, D.C. if the government didn’t bar racial discrimination in defense plants with government contracts. Faced with this threat, President Roosevelt banned such discrimination and created the Fair Employment Practices Commission (FEPC) to investigate bias charges.

Millions of women, including many mothers, entered the industrial workforce during the war. They found jobs in especially large numbers in the shipbuilding and aircraft industries. “Rosie the Riveter” became a popular symbol of patriotic womanhood. Though defense jobs paid far more than traditional “female” occupations, women were still often paid less than men performing comparable work. Moreover, at war’s end, women were expected to leave the factories to make way for returning male veterans.


HIGGINS BOATS

Higgins Industries designed and built two basic classes of military craft.

The first was landing craft, constructed of wood and steel and used to transport fully armed troops, light tanks, field artillery, and other mechanized equipment and supplies to shore. These boats helped make the amphibious landings of World War II possible.

Higgins also designed and manufactured supply vessels and specialized patrol craft, including high-speed PT boats, antisubmarine boats, and dispatch boats.

LCVP (Landing Craft, Vehicle, Personnel)
The LCVP was the most famous landing craft designed and produced by Higgins Industries. It could land soldiers, and even jeeps, on a beach. LCVPs were used in North Africa, Europe, and the Pacific during the war.

From the Eureka…
The LCVP (Landing Craft, Vehicle, Personnel), the best-known landing craft designed by Andrew Higgins, evolved from a boat he created before the war for use in the swamps and marshes of Louisiana. Trappers and oil companies needed a rugged, shallow-bottomed craft that could navigate these waters, run aground, and retract itself without damaging its hull. Higgins developed a boat that could perform all these tasks: a spoonbill-bowed craft he called the Eureka. Over time he modified and improved his craft and found markets for it in the United States and abroad.

…to the LCP(L)
During the 1930s Higgins tried to interest the U.S. Navy in adapting his shallow-draft Eureka for use as an amphibious landing craft. The navy showed little interest, but Higgins persisted. After a long struggle, he finally secured a government contract to build modified Eurekas for military use. The new boat was called the LCP (Landing Craft, Personnel) and, later, the LCP(L) (Landing Craft, Personnel, Large). In its most advanced form the LCP(L) measured 36 feet in length. It could transport men from ships offshore directly onto a beach, then retract itself, turn, and head back to sea.

The LCVP (Landing Craft, Vehicle, Personnel) was developed because the U.S. Marines needed a boat capable of transporting vehicles to shore. Higgins adapted the LCP(L) to meet this requirement. He replaced the LCP(L)’s rounded bow with a retractable ramp. The new craft was tested for the first time on May 26, 1941, on Lake Pontchartrain. It carried a truck and 36 Higgins employees safely to shore. The LCVP became the military’s standard vehicle and personnel landing craft. Thousands were in service during the war.

New Orleans” Home of the Higgins Boats
“If Higgins had not designed and built those LCVPs, we never could have landed over an open beach. The whole strategy of the war would have been different.” 
–General Dwight D. Eisenhower

The city of New Orleans made a unique and crucial contribution to America’s war effort. This was the home of Higgins Industries, a small boat company owned by a flamboyant entrepreneur named Andrew Jackson Higgins. The story of Higgins’ role in the war is little known today, but his contribution to the Allied victory was immeasurable.

World War II presented Allied war planners with a tactical dilemma–how to make large amphibious landings of armies against defended coasts. For America this was a particularly thorny problem, since its armed forces had to mount amphibious invasions at sites ranging from Pacific atolls to North Africa to the coast of France.

Higgins’ contribution was to design and mass-produce boats that could ferry soldiers, jeeps, and even tanks from a ship at sea directly onto beaches. Such craft gave Allied planners greater flexibility. They no longer needed to attack heavily defended ports before landing an assault force. Higgins’ boats were used in every major American amphibious operation of World War II. His achievements earned him many accolades. The greatest came from General Dwight D. Eisenhower, who called Higgins “the man who won the war for us.”

From the Bayou to the Battlefront
Before World War II Andrew Higgins operated a small boatyard, building workboats designed to operate in the shallow waters of Louisiana’s bayous. During the 1920s and 1930s America’s military began exploring ways to make amphibious landings. Higgins became involved in this effort, adapting designs for shallow-draft boats he had developed for peacetime uses. His company created amphibious assault craft capable of shuttling men and equipment quickly and safely from ship to shore. When the war came, business boomed. Higgins built new factories with mass production lines and employed thousands of workers. He even opened a training school for boat operators.

New Orleans Naval Giant During World War II Higgins Industries grew from a small business operating a single boatyard into the largest private employer in Louisiana. The company turned out astounding numbers of boats and ships. In September 1943 the US Navy had 14,072 vessels. Of these, 8,865 had been designed and built by Higgins Industries.

Where Norman Rockwell art played a role

The Four Freedoms speech delivered on January 6, 1941

Roosevelt’s hope was to provide a justification for why the United States should abandon the isolationist policies that emerged from World War I. The speech coincided with the introduction of the Lend-Lease Bill, which furthered Roosevelt’s plan to become the “arsenal of democracy” and support the allies (primarily the British) with much needed supplies necessary. Furthermore, the speech established what would become the ideological basis for United States involvement in World War II, all framed in terms of the individual rights and freedoms that are the hallmark of American politics.

Lend Lease Act

This bill authorized the president to “sell, transfer title to, exchange, lease, lend, or otherwise dispose of, to any government (the defense of which the President deems vital to the defense of the United States) any article of defense.” In effect, it allowed Pres. Roosevelt authorized the transfer of military materials to Great Britain with the understanding that they would someday be repaid or returned if they were not destroyed. To administer the program, Roosevelt created the Office of Lend-Lease Administration under the leadership of former steel industry executive Edward R. Stettinius.
To sell the idea of ​​the program to a skeptical and still somewhat isolationist American public, Roosevelt likened it to lending the hose to a neighbor whose house was on fire. “What do I do in a crisis like this?” the president asked the press. “I don’t say… ‘Neighbor, my garden hose costs me $15, you have to pay me $15 for it’ – I don’t want $15 – I want my garden hose back after the fire is over .” In April, he expanded the program, offering Lend-Lease aid to China in its war against the Japanese. Quickly taking advantage of the program, the British received over $1 billion in aid by October 1941.

The speech given by President Roosevelt incorporated the following

“In the future, the days we seek to make safe today, we do so by looking forward to a world founded on four essential human freedoms. The first is freedom of opinion and expression – everywhere in the world. The second is the freedom of each person to worship God in his own way – everywhere in the world. The third is freedom from want, which, translated into global terms, means economic understandings that can guarantee each nation a life of healthy peace for its inhabitants – everywhere in the world. The fourth is freedom from fear, which, translated into global terms, means a worldwide reduction of armaments to such an extent and so completely that no nation will be in a position to commit an act of physical aggression against any neighbor – anywhere of the world. This is not a vision from a distant millennium. It is a definitive basis for a kind of possible world in our own time and generation. This type of world is the very antithesis of the so-called new order of tyranny that dictators seek to create as if they were throwing a bomb. “-Franklin D. Roosevelt, excerpted from the State of the Union Address to Congress, January 6, 1941.

The flag of the four freedoms or “United Nations Honor Flag” ca. 1943-1948

The declaration of the four freedoms as a justification for the war would resonate throughout the duration of the war and for decades to come, more as a framework to remember. The four freedoms became the main binder of America’s war aims and the core of all attempts to gain public support for the war. With the creation of the Office of War Information (1942), as well as Norman Rockwell’s famous paintings, the four freedoms were heralded as values ​​central to life and examples of American exceptionalism. This did not occur in a logical sequence as later suggested, as the government rejected Norman Rockwell’s offer, but as the paintings had the effect of making the public understand and became popular, they were incorporated.

Initial rejection of Norman Rockwell’s depiction of the four freedoms:

This twist that looked from today’s perspective seems absurd is an excellent example how thing happen in reality when dealing with human affairs and deserves a detailed account:

The four freedoms speech was a great success and these objectives would be central to the development of post-war human rights policy. However, in 1941 the speech received strong criticism from isolationists and many conservatives in Congress. Critics argued that the four freedoms were simply a charter for Roosevelt’s New Deal, social reforms that had already created deep divisions within Congress. Conservatives who opposed social programs and increased government intervention argued against Roosevelt’s attempt to justify and use the war as necessary to defend liberal policies.
While Liberties became a force in American thinking about war, it was never the exclusive justification for war. Research and surveys conducted by the Office of War Information (OWI) revealed that “self-defense” of American values, and revenge for Pearl Harbor were the most common reasons for war. Although Roosevelt sought to use the four freedoms as a counter ideology to fascism and as a force to mobilize an apathetic nation for war in Europe, the record indicates that Americans were more concerned with their own personal experience than with liberal humanitarianism.

Rockwell approached the Office of War Information (OWI) with his idea to use the paintings as part of the government’s war effort, hoping they would help promote the ideals for which the U.S. was fighting. The Office of War Information OWI initially rejected Rockwell’s paintings, feeling that his approach was too folksy and traditional for the government’s purposes. They were looking for more modern and abstract forms of propaganda to mobilize public support for the war.

Undeterred by the government’s rejection, Rockwell turned to The Saturday Evening Post, then a widely read magazine where he regularly contributedk and The Four Freedoms series was published in four consecutive issues in 1943, accompanied by essays from contemporary writers.

Freedom of Worship

You can read it in full, but here are some excerpts:

Why are we religious?

Man differs from the animal in two things: He laughs, and he prays. Perhaps the animal laughs when he plays, and prays when he begs or mourns; we shall never know any soul but our own, and never that. But the mark of man is that he beats his head against the riddle of life, knows his infinite weakness of body and mind, lifts up his heart to a hidden presence and power, and finds in his faith a beacon of heartening hope, a pillar of strength for his fragile decency.

Religion like music lives in a world beyond words or thoughts or things. These experiences feel the mystery of consciousness within themselves, and will not say that they are machines. These experiences reveal the growth of the soil and the child, they inhabit the fear and reverence of the swelling of the fields, in the hum that permeates the forest, and they perceive in each cell and atom the same creative power that springs from their own effort and achievement. Their impassive faces hide silent gratitude for the arrival of summer, the deadly beauty of autumn, and the joyful resurrection of spring. They have patiently watched the movement of the stars, and find them in a majestic order so harmoniously regular that our ears would bear their music if it were not eternal. Their weary eyes have known the ineffable splendor of earth and sky, even in storm, terror and destruction, and have never doubted that in this beauty there is some sense and meaning. They saw death, and looked beyond it with their hope

Based on this intuition and anticipating a time when many would say they were “spiritual but not religious” Durant offers the following:

And so they worship. The poetry of their ritual redeems the prose of their daily toil; the prayers they pray are secret summonses to their better selves; the songs they sing are shouts of joy in their refreshened strength. The commandments they receive, through which they can live with one another in order and peace, come to them as the imperatives of an inescapable deity, not as the edicts of questionable men. Through these commands they are made part of a divine drama, and their harassed lives take on a scope and dignity that cannot be canceled out by death.

Freedom from fear

Read it in full. Here some excerpts:

What do we mean when we say “freedom from fear”? It isn’t just a formula or a set of words. It’s a look in the eyes and a feeling in the heart and a thing to be won against odds. It goes to the roots of life — to a man and a woman and their children and the home they can make and keep.

Fear has walked at man’s heels through many ages — fear of wild beasts and wilder nature, fear of the inexplicable gods of thunder and lightning, fear of his neighbor man.

He saw his rooftree burned with fire from heaven — and did not know why. He saw his children die of plague — and did not know why. He saw them starve, he saw them made slaves. It happened — he did not know why. Those things had always happened.

Since our nation began, men and women have come here for just that freedom — freedom from the fear that lies at the heart of every unjust law, of every tyrannical exercise of power by one man over another man. They came from every stock — the men who had seen the face of tyranny, the men who wanted room to breathe and a chance to be men. And the cranks and the starry-eyed came, too, to build Zion and New Harmony and Americanopolis and the states and cities that perished before they lived — the valuable cranks who push the world ahead an inch. And a lot of it never happened, but we did make a free nation.

It is not enough to say, “Here, in our country, we are strong. Let the rest of the world sink or swim. We can take care of ourselves.” That may have been true at one time, but it is no longer true. We are not an island in space, but a continent in the world. While the air is the air, a bomb can kill your children and mine. Fear and ignorance a thousand miles away may spread pestilence in our own town. A war between nations on the other side of the globe may endanger all we love and cherish.

We who are alive today did not make our free institutions. We got them from the men of the past, and we hold them in trust for the future. Should we put ease and selfishness above them, that trust will fail and we shall lose all, not a portion or a degree of liberty, but all that has been built for us and all that we hope to build. Real peace will not be won with one victory. It can be won only by long determination, firm resolve, and a wish to share and work with other men, no matter what their race or creed or condition. And yet, we do have the choice. We can have freedom from fear.

Here is a house, a woman, a man, their children. They are not free from life and the obligations of life. But they can be free from fear. All over the world, they can be free from fear. And we know they are not yet free.

Freedom of Speech

Read in full. Some excerpts

In a small chalet on the mountain road from Verona to Innsbruck, two furtive tourists sat, pretending not to study each other. Outdoors, the great hills rose in peace that summer evening in 1912; indoors, the two remaining patrons, both young, both dusty from the road, sat across the room from each other, each supping at his own small table.

Actually they are Adolf Hitler, then a painter and Benito Mussolini, then a journalist.

The text shrewdly introduces their personalities, but perhaps a summary was that:

“Greatness is easily mistaken for insanity,” the swarthy young man said. “Greatness is the ability to reduce the most intricate facts to simple terms. For instance, take fighting. Success is obtained by putting your enemy off his guard, then striking him where he is weakest — in the back, if possible. War is as simple as that.”

“Yes, and so is politics,” the painter assented absently as he ate some of the fruit that formed his supper. “Our mutual understanding of greatness helps to show that we are not lunatics, but only a simple matter of geography is needed to prove our sanity.”

“Geography?” The journalist didn’t follow this thought. “How so?”

“Imagine a map.” The painter ate a grape. “Put yourself in England, for instance, and put me and my dazzling ideas into that polyglot zoo, the United States of America. You in England can bellow attacks on the government till you wear out your larynx, and some people will agree with you and some won’t, and that is all that would happen. In America I could do the same. Do you not agree?”

“Certainly,” the journalist said. “In those countries the people create their own governments. They make them what they please, and so the people really are the governments. They let anybody stand up and say what he thinks. If they believe he’s said something sensible, they vote to do what he suggests. If they think he is foolish, they vote no. Those countries are poor fields for such as you and me, because why conspire in a wine cellar to change laws that permit themselves to be changed openly?”

“Exactly.” The watercolor painter smiled his faint strange smile. “Speech is the expression of thought and will. Therefore, freedom of speech means freedom of the people. If you prevent them from expressing their will in speech, you have them enchained, an absolute monarchy. Of course, nowadays he who chains the people is called a dictator.”

Freedom From Want

Read in full. Some excerpts:

We march on, though sometimes strange moods fill our children. Our march toward security and peace is the march of freedom — the freedom that we should like to become a living part of. It is the dignity of the individual to live in a society of free men, where the spirit of understanding and belief exist; of understanding that all men are equal; that all men, whatever their color, race, religion or estate, should be given equal opportunity to serve themselves and each other according to their needs and abilities.

But we are not really free unless we use what we produce. So long as the fruit of our labor is denied us, so long will want manifest itself in a world of slaves. It is only when we have plenty to eat — plenty of everything — that we begin to understand what freedom means. To us, freedom is not an intangible thing. When we have enough to eat, then we are healthy enough to enjoy what we eat. Then we have the time and ability to read and think and discuss things. Then we are not merely living but also becoming a creative part of life. It is only then that we become a growing part of democracy.

Outcome

Before we finalize that with other contributions of Norman Rockwell to the war effort, let´s see how all that strategy led to the victory of the Allies and the end of WW2:

Eleven months after this publication, on December 8, 1941, the USA declared war on Japan, one day after the attack on Pear Harbor which was, in my opinion, the most significant event of the 20th century. Three days later, on December 11, Germany and Italy declared war on the United States.
The atomic bomb was dropped on Hiroshima on August 6, 1945 and on Nagasaki on August 9.
Japan surrendered unconditionally (it had never lost a war) on September 2nd (after announcing this on August 15th).
Mussolini was executed on April 28th, Hitler committed suicide on the 30th of the same month.
Germany surrendered little by little, through the commanders in chief, who each negotiated separately.
On May 1st in Italy, on May 2nd in Berlin, on May 4th in Northern Germany, Denmark and the Netherlands and also in Bavaria and Central Europe.
Goring, second in chief to Hitler, surrendered on the 6th.
On the same day, the 6th, the fortress city of Breslau, surrounded by the Russians, surrendered to them. On the 6th and 8th the forces that were in the Channel Islands surrendered to what Churchill announced in a radio address at 15:00 on the same day that “Hostilities will officially cease one minute after midnight today.”
Jodl and Keitel surrendered unconditionally on behalf of all German forces thirty minutes after the fall of the Breslau fortress, signing on the morning of May 7th their unconditional surrender to the Americans, repeating the same to the Russians on the 8th.
The 8th was V day for the Americans, but as it was the 9th in Russia, they celebrate the end of the war on this date.
The division of Germany into 4 parts, governed by the Americans, Russians, England (UK) and France, was signed on June 5th.
Truman would announce the cessation of hostilities between the US and Germany on December 13, 1946.
The peace treaty between the allies and the axis countries was signed on February 10, 1947. The Federal Republic of Germany was founded on May 23, 1949, its first government formed on September 20 of the same year. On November 22, the war allies formally declared the end of the state of war with Germany on November 22, 1949. The full authority of a sovereign state was granted on May 5, 1955, with special powers for England, USA and Russia, which would disappear completely on March 15, 1991.

In Japan it was a little different and it’s worth a word.

They had never lost a war, as I said, and surrendering was shameful and cowardly, as the Samurai code embedded in military culture decidedly rejected the idea of ​​surrender, with the implication being that the defeated were at the mercy of the victorious and they themselves never demonstrated mercy towards their vanquished, and the way they raped, plundered and plundered, including enslaving the people they dominated, was famous.
They imagined they would suffer the same thing, made worse by the fact that they have a culture of committing suicide if they don’t get their way, which indicated serious problems for the allies in how to resolve all this. This was what was behind why they didn’t want to surrender. For us it is impossible to imagine what they felt (or feel) for the emperor, who is the human embodiment of the Japanese nation, its culture and civilization and for whom they were willing to die, perhaps in a collective suicide.
In other words, if the emperor ceased to exist, Japan would cease with it.
In a rare moment of common sense, the Americans understood this, or perhaps they had already understood this in another rare moment of madness, perhaps the greatest of all that a human being can have, when dropping the bombs, and President Truman guaranteed in writing that the Japan would not be enslaved, the emperor would continue to reign, under the authority of the allied command, General MacArthur. In view of this, they finally surrendered unconditionally and the document that I transcribe below was signed.
A curious thing about this document, which reveals a lot about nature and the human nature, is that the copy in Japan’s possession and the one in the USA differ in the following:
The allies’ copy was presented in leather with gold lining and with stamps from both countries printed on the front. The Japanese copy was made on rough canvas without stamps on the front. The Canadian Representative (who was blind in one eye) signed below rather than above the line and created a problem where everyone signed on the wrong line to the one intended for them and the Japanese objected. When the discrepancy was pointed out to General Sutherland, (MacArthur’s chief of staff) he crossed out the pre-printed names of the Allied nations and rewrote the titles by hand in their correct relative positions. This change was initially not accepted by the Japanese, whereupon Sutherland then initialed each change (with an abbreviated signature). Faced with this, the Japanese representatives did not object any further.
Japan was occupied for the first time in its history and was transformed into a democracy and somehow followed the model of President Roosevelt’s New Deal.
On September 8, 1951, the occupation ended, which officially ended on April 28, 1952, when Japan once again became an independent country, except for the Ryuku Islands.
Japan would be divided as Germany was, and it is historically unclear why this did not occur. Apparently, it was Truman who accomplished this.
Russia got North Korea and the Kuril Islands.
The US took South Korea, Okinawa, the Amami Islands, the Ogasawara Islands, and the Japanese possessions of Micronesia. China got Taiwan and Penghu.

Pearl Harbour nowadays (1995)

Me, REC visiting Pearl Harbour in 1995

Other contributions of Norman Rockwell to the war effort

Norman Rockwell’s contributions to the war effort through his art extended beyond the Four Freedoms. His works captured the spirit of the American people during a challenging time, promoting patriotism, resilience, and the importance of supporting the war effort both on the home front and abroad. These paintings and magazine covers remain iconic representations of World War II and the collective American experience during that period.

War Effort Paintings and Magazine Covers by Norman Rockwell

1. Rosie the Riveter (1943)

  • Description: Depicts a strong, confident woman taking a lunch break with a riveting gun in her lap and her foot on a copy of Hitler’s Mein Kampf. This painting became an iconic representation of the women who worked in factories during the war.
  • Published: The Saturday Evening Post, May 29, 1943

2. The Homecoming (1945)

  • Description: Shows a soldier returning home and being warmly greeted by his family and neighbors, capturing the joy and relief of the war’s end.
  • Published: The Saturday Evening Post, May 26, 1945

3. Liberty Girl (1943)

  • Description: Features a patriotic young woman dressed in red, white, and blue, surrounded by symbols of American industry and war effort, such as tools and factory machinery.
  • Published: The Saturday Evening Post, September 4, 1943

4. Potato Peeler. (1942)

Potato Peeler
  • Description: The painting depicts a U.S. Army private sitting on a crate and peeling potatoes with a knife. The soldier appears cheerful and content, suggesting a sense of duty and normalcy even in mundane tasks. This painting highlights the everyday life of soldiers and the importance of even the most routine tasks in the war effort.
  • Published: The Saturday Evening Post, August 15, 1942.

5. Let’s Give Him Enough and On Time (1942)

  • Description: Part of a series of posters encouraging increased production and efficiency in war industries.
  • Commissioned by: The War Production Board

6. “War Bond” (1944)

  • Description: Shows a soldier participating in Christmas festivities, emphasizing the importance of morale and holiday spirit during wartime.
  • Published: Saturday Evening Post Cover, July,1944

7. War Bonds Posters (1941-1945)

  • Description: Rockwell created several posters encouraging Americans to buy war bonds to support the war effort financially. These posters often featured patriotic themes and imagery.

8. A Family Tree (1943)

  • Description: Illustrates a genealogical tree showing the descendants of a POW Norman Rockwell created for ilustrative purposes
  • Published: The Saturday Evening Post, September 16, 1944

9. Family tree (1942)

  • Description:llustrates a genealogical tree showing the descendants of a pirate, including soldiers and sailors from various American wars.
  • Published: The Saturday Evening Post, December 26, 1942

10. War Stories October 13, 1945

  • Description: Shows a young soldier telling stories about the war
  • Published: The Saturday Evening Post, November, 1942

There were many more, but these samples are enough to give an idea of the extension of his influence in the American imagination.  

Last but not least

The United States’ emergence as a dominant global power following World War II

The United States’ rise to global dominance after World War II brought about significant economic, military, political, cultural, technological, social, and geopolitical changes. These consequences reshaped the global landscape and established the U.S. as a leading power in various domains, influencing international affairs and the world order for decades to come.

1. Economic Consequences

Post-War Economic Boom:

  • Economic Growth: The U.S. experienced a period of unprecedented economic prosperity in the post-war years. Industrial production, consumer spending, and technological innovation soared, leading to a higher standard of living.
  • Global Economic Leadership: The U.S. dollar became the world’s primary reserve currency, and the U.S. played a central role in establishing global economic institutions like the International Monetary Fund (IMF) and the World Bank.

Sources:

2. Military Consequences

Global Military Presence:

  • Military Bases: The U.S. established numerous military bases around the world, solidifying its global presence and ability to project power.
  • Nuclear Arsenal: The development and stockpiling of nuclear weapons positioned the U.S. as a superpower in the nuclear age, leading to the arms race during the Cold War.

Sources:

3. Political Consequences

Cold War Leadership:

  • Containment of Communism: The U.S. adopted a policy of containment to prevent the spread of communism, leading to various conflicts and interventions, including the Korean War, the Vietnam War, and numerous other Cold War confrontations.
  • NATO and Alliances: The formation of NATO and other military alliances strengthened U.S. political and military influence across Europe and other parts of the world.

Sources:

4. Cultural Consequences

Cultural Influence:

  • Hollywood and Media: American culture, particularly through Hollywood films, music, and television, spread globally, influencing lifestyles, fashion, and cultural norms.
  • Soft Power: The U.S. exerted significant “soft power” through cultural diplomacy, promoting American values of democracy, freedom, and capitalism.

Sources:

5. Technological and Scientific Consequences

Technological Leadership:

  • Space Race: The U.S. invested heavily in science and technology, exemplified by the space race and the moon landing in 1969.
  • Innovation: Advancements in technology, medicine, and engineering positioned the U.S. as a leader in innovation and research.

Sources:

6. Social Consequences

Civil Rights Movement:

  • Racial Equality: The post-war period saw significant advancements in civil rights, culminating in landmark legislation like the Civil Rights Act of 1964 and the Voting Rights Act of 1965.
  • Social Change: The war and its aftermath also catalyzed changes in gender roles and expectations, contributing to the feminist movement.

Sources:

7. Geopolitical Consequences

Bipolar World Order:

  • U.S.-Soviet Rivalry: The U.S. emerged as one of the two superpowers, leading to a bipolar world order characterized by the Cold War rivalry with the Soviet Union.
  • Influence in International Affairs: The U.S. took a leading role in international organizations like the United Nations, shaping global policies and responses to international crises.

Sources:

Artistic Styles of Paintings.

In the history of art, there are numerous styles and movements, each with its own distinctive characteristics, philosophies, and influences. Here are some of the most notable styles and movements:

Major Styles and Movements in Art

1. Prehistoric Art:

  • Description: Includes cave paintings and megalithic structures.
  • Examples: Lascaux cave paintings, Stonehenge.
  • How it all started

2. Ancient Art:

  • Description: Art from early civilizations, often religious or mythological.
  • Examples: Egyptian hieroglyphs, Greek sculptures.

3. Medieval Art:

  • Description: Art from the Middle Ages, characterized by religious themes and Gothic architecture.
  • Examples: Byzantine mosaics, Gothic cathedrals.

4. Renaissance:

  • Description: Rebirth of classical ideas, humanism, and naturalism.
  • Examples: Works by Leonardo da Vinci, Michelangelo.
  • Renaissance Art

5. Baroque:

  • Description: Dramatic, detailed, and elaborate art and architecture.
  • Examples: Caravaggio’s paintings, Bernini’s sculptures.
  • Baroque Art

6. Rococo:

  • Description: Ornate, playful, and light art, often with pastel colors.
  • Examples: Works by François Boucher, Jean-Honoré Fragonard.

7. Neoclassicism:

  • Description: Revival of classical style, emphasizing simplicity and symmetry.
  • Examples: Jacques-Louis David’s paintings, Thomas Jefferson’s architecture.
  • Neoclassicism in painting

8. Romanticism:

  • Description: Emphasis on emotion, nature, and individualism.
  • Examples: Works by Caspar David Friedrich, Francisco Goya.
  • Romanticism in painting

9. Realism:

10. Impressionism:

  • Description: Focus on light, color, and everyday scenes, often with visible brush strokes.
  • Examples: Claude Monet, Edgar Degas.

11. Post-Impressionism:

  • Description: Diverse reactions to Impressionism, emphasizing form and structure.
  • Examples: Vincent van Gogh, Paul Cézanne.

12. Symbolism:

  • Description: Use of symbolic imagery to convey deeper meanings and emotions.
  • Examples: Gustave Moreau, Odilon Redon.

13. Art Nouveau:

  • Description: Decorative art with organic, flowing lines and natural forms.
  • Examples: Works by Alphonse Mucha, Antoni Gaudí.

14. Fauvism:

  • Description: Use of bold, vibrant colors and expressive brushwork.
  • Examples: Henri Matisse, André Derain.

15. Expressionism:

  • Description: Emphasis on representing emotional experience rather than physical reality.
  • Examples: Edvard Munch, Egon Schiele.

16. Cubism:

  • Description: Abstracted forms, fragmented objects into geometric shapes.
  • Examples: Pablo Picasso, Georges Braque.

17. Futurism:

  • Description: Emphasized speed, technology, and dynamic movement.
  • Examples: Umberto Boccioni, Giacomo Balla.

18. Dadaism:

  • Description: Anarchic and anti-establishment, often absurd and satirical.
  • Examples: Marcel Duchamp, Hannah Höch.

19. Surrealism:

  • Description: Focus on the unconscious mind, dream-like scenes, and fantastical imagery.
  • Examples: Salvador Dalí, René Magritte.

20. Abstract Expressionism:

  • Description: Emphasized spontaneous, automatic, or subconscious creation.
  • Examples: Jackson Pollock, Mark Rothko.

21. Pop Art:

  • Description: Drew inspiration from popular culture and mass media.
  • Examples: Andy Warhol, Roy Lichtenstein.

22. Minimalism:

  • Description: Focus on simplicity and purity of form, often with limited color palette.
  • Examples: Donald Judd, Frank Stella.

23. Conceptual Art:

  • Description: Focus on ideas and concepts rather than aesthetic objects.
  • Examples: Sol LeWitt, Joseph Kosuth.

24. Contemporary Art:

  • Description: Art produced in the late 20th century and onwards, encompassing diverse styles and media.
  • Examples: Damien Hirst, Jeff Koons.

Sources and Further Reading

This list covers a broad spectrum of artistic styles and movements, illustrating the rich and varied history of art. Each movement has its own unique characteristics and contributions to the evolution of art.

MU as ultimate reality

This chinese ideogram came to my attention because in a documentary about the japanese film director Ozu Yasugiro it showed that it was inscribed in his tomb.

To figure out what is at stake in Mu’s ideogram which is engraved at Ozu Yasugiro‘s grave, let’s take a look at his films. Since it is not practical, let´s rather see Tokyo Story, which is the hallmark of his accomplishments:

His focus is family life in Japan, but his characters in his films are very much universal. In Ozu’s films, particularly this one Tokyo story, you can see yourself and those surrounding you, especially the loved ones. A film made in 1953, yet is eternal and practically anyone, adult, can relate to it and encounter with oneself.

Allthough Ozu made movies since the silent films in the 20’s he also did talkies and modern technology coloured movies up to the 60’s (which can be seen in the list above) until he died in 1963.

He was considered too japanese to interest anyone outside Japan by the japanese distributors and they never cared to export his movies, making him unknown outside Japan until the 80’s. His films could be seen in some specific Japanese cultural centers abroad and Wim Wenders tells us that an american housewife (whose name he didn’t remember) saw some of his movies in a Brooklyn, NY, japanese cultural center and she was so impressed that made a mission of her life to make Ozu’s films available to the american audience. Let’s hear it from Wim Wenders in a class he did in 2019: 

In this class, he discussed aspects of filmaking that because of the technological revolution which digitalization brought to the industry, Ozu’s movies became even more important and valuable and he explained why:

Before the advent of digitalization, or the digital revolution, the equipment, cameras, films, laboratory procedures to reveal the film strips, etc. were very expensive and not at the reach of the average person. Today, anybody can make a film and he knows young 18 years old kids who produced 90 minutes movies. This is an opportunity and a danger, because with this inflation came a devaluation that shatters what he calls sacredness of the images which nowadays are produced carelessly and at a speed which are entirely different from the era before digitalization. The cinema, as an entitity changed completely and something which was precious is lost. Everything in a Ozu film has some sort of hollinness in it and is carefully placed there. The effect can be seen for example when Ozu creates the image of a father, or the mother, or a famil, he does it in such a fashion that you are seeing a kind of archetype image, i.e., the father, the mother and the family of them all. This is what makes his movies universal and timeless and above all, you feel it under your skin or with your heart and there is no need for an explanation for you to perceive it.

As he told, he went to Japan in 1983 and filmed a lot of images and related subjects to Ozu and he issued a documentary about Ozu which recently has become available worldwide (2024) and can be seen at Prime video. It is not yet available at Youtube, but you can see the trailer:

To grasp why he is so acclaimed, let´s take a look at the documentary Talking with Ozu (1993), a tribute to Yasujiro Ozu featuring Lindsay Anderson, Claire Denis, Hou Hsiao-hsien, Aki Kaurismäki, Stanley Kwan, Paul Schrader and Wim Wenders:

All these directors give their impressions which can be summarized the following way:
It is like reflections on a mirror. What does Ozu Yasujiro reflect? He reflects reality. But, what actually is reality? Reality is a “work in progress”, i.e., an unfinished project that is still being added to or developed.

Ozu not only grasps this ever changing characteristic but he manages to create a story with images that have the power to bring up life experience as most people live it, or what life is about, specially parents, family and children, which are themes that affect us all.

Another feature is that he does that in a personal way, i.e., these directors and I imagine most viewers, relate their own personal life experiences with those Ozu puts up there on the screen. Perhaps this has to do with his straightforward way to tell the story with a mix of honesty and irony which brings his special kind of humour which is never sarcastic.

Although his films were made a long time ago, there is also an agreement that his films do not become old fashioned or out of date.

It is very important to notice that he managed to discuss all important aspects of life without recurring to murder or violence, which seems to be the hallmark of modern film making, especially in the United States.

His insights on the human condition are perhaps one of the best ever yet attained by cinema.

I discuss reality separately under discursive perspective and it becomes clear the value of  Ozu’s approach to reality, which is what reality is as you can feel it and not an endless discussion tied to some kind of reference that cannot reach any conclusion. This is what made him great and so important, because he manages to make you feel as “the real thing” which Wim Wenders so aptly described in his documentary about him and I quote: 

“Each person knows  for himself what is meant by the perception of reality. Each person sees his reality with his own eyes. When sees the others, above all the  people one loves, when sees the objects surrounding oneself, sees the cities and countrysides where each one lives, when also sees death, men’s mortality and transitoriness of all things, when sees and experiences love, loneliness, happiness,  sadness, fear, in short, each person sees for himself: life. And each person knows for himself the extreme gap that  often exists between personal experience and  the depiction of that experience up there on the screen. We have learned to consider the vast distance separating cinema from life as so perfectly natural that we gasp  and give a start, when we suddenly discover something true or real  in a movie. Be it nothing more  than the gesture of a child in the background or a bird flying across the frame or a cloud casting its shadow over the scene for but an instant. It is a rarity in today’s cinema to find such moments of truth. For people or objects to show themselves as they really are. That’s what was so unique in Ozu’s films. and above all in his later ones.  There were such moments of truth. No, not just moments, a long range of truths  lasting from the first image to the last.Films which actually and continuously  dealt with life itself and in which the people, the objects, the cities and the countrysides  reveal themselves. Such a depiction of reality, such an art is no longer to be found in cinema. It was once. Mu. nothingness. What remains today.”

Ozu Yasujirō’s Grave Tombstone with the “Mu” Chinese Ideogram

Ozu Yasujirō, one of Japan’s most esteemed filmmakers, is buried at the Engaku-ji Temple in Kamakura, Japan. His grave is marked by a simple tombstone inscribed with the Chinese character “無” (Mu), which translates to “nothingness” or “emptiness.” This choice of inscription reflects profound philosophical and spiritual meanings.

Significance of “Mu”

  1. Philosophical Context:
    • Buddhism: In Zen Buddhism, “Mu” signifies a fundamental concept of emptiness or void. It denotes a state of being that is free from desires, attachments, and illusions, representing the ultimate reality beyond the duality of existence and non-existence.
    • Zen Koan: The character “Mu” is famously used in a Zen koan (a paradoxical statement or question used in Zen practice to provoke deep thought and enlightenment). One well-known koan involves a monk asking, “Does a dog have Buddha nature?” to which the Zen master Joshu replies, “Mu,” indicating that the question is beyond conventional logic and dualistic thinking.
    • It is what we call loaded question, i.e., the context does not allow an answer what is more easily understood as “wrong question”
    • Source: Stanford Encyclopedia of Philosophy – Zen Buddhism, The Zen Koan as a Means of Attaining Enlightenment
  2. Personal Philosophy:
    • Minimalism: The choice of a simple tombstone with a single character aligns with Ozu’s minimalist aesthetic in his films, which often feature restrained visual style and subtle narrative techniques. This minimalism in both his work and his final resting place underscores a focus on the essence of things rather than their external complexities.
    • Life and Death: The inscription reflects Ozu’s contemplation on life’s transience and the nature of existence. In his films, he frequently explored themes of impermanence, family dynamics, and the passage of time, all of which resonate with the idea of “Mu” as a state of acceptance and letting go.
    • Source: Bordwell, D. (1988). “Ozu and the Poetics of Cinema”, Richie, D. (1974). “Ozu: His Life and Films”
  3. Cultural and Artistic Impact:
    • Symbolism: “Mu” represents a broader cultural and artistic symbolism that has been integral to Japanese aesthetics. The concept of “Wabi-Sabi,” which finds beauty in imperfection and transience, is closely related to the philosophical idea of emptiness and simplicity expressed by “Mu.”
    • Reflection of Values: The choice of “Mu” encapsulates values that Ozu cherished, such as humility, simplicity, and a deep connection with the natural flow of life, which are evident in his cinematic portrayals of everyday life and human relationships.
    • Source: Japan Times – The Elegance of Japanese Aesthetics, National Geographic – Japanese Culture

Contextual and Cultural Significance

  1. Grave Location:
    • Engaku-ji Temple: Ozu’s grave is located at Engaku-ji, a Zen Buddhist temple in Kamakura, which is known for its serene environment and historical significance in Japanese Zen Buddhism. The location itself complements the philosophical message of “Mu” due to its association with meditative practices and Zen teachings.
    • Source: Engaku-ji Temple Official Site
  2. Legacy:
    • Lasting Influence: The simplicity and depth of the tombstone’s message continue to inspire and intrigue filmmakers, scholars, and fans of Ozu’s work. It serves as a powerful reminder of his legacy and the philosophical depth that permeated his films.
    • Cultural Reference: The use of “Mu” has become a cultural reference point for understanding Ozu’s approach to life and art, reflecting a worldview that finds meaning in simplicity and the contemplation of existence.
    • Source: Criterion Collection – Ozu Yasujiro, British Film Institute – Ozu Yasujiro

Conclusion

Ozu Yasujirō’s choice of the “Mu” character for his tombstone encapsulates a profound philosophical statement that resonates with Zen Buddhist teachings and his personal artistic philosophy. It symbolizes a journey towards understanding the essence of life and reality, free from the distractions of materialism and superficiality. This simple yet profound inscription continues to reflect the depth of Ozu’s legacy in both cinema and philosophical thought.

For more detailed exploration:

Other contexts where the idea behind MU can be found

The concept of “Mu” in Zen Buddhism shares significant similarities with Via Negativa, both of which employ negation to transcend the limitations of human understanding and language. By negating what can be said about the divine or reality, both approaches aim to lead practitioners to a direct, experiential understanding that goes beyond intellectual grasping.

M C Escher

M C Escher impossible drawings are a kind of answer to a question which is more suited to the Mu concept and therefore unanswerable.

It is insightful to interpret M.C. Escher’s impossible drawings as related to the concept of “Mu” from Zen Buddhism, which represents a question that transcends conventional answers or highlights the limitations of binary thinking.

M.C. Escher’s Impossible Drawings

1. Nature of Escher’s Art:

  • Impossible Objects: Escher’s artwork often features paradoxical structures that cannot exist in three-dimensional space, such as the Penrose stairs in “Ascending and Descending” and the endless waterfalls in “Waterfall.”
  • Visual Paradoxes: These drawings challenge the viewer’s perception of reality and logical coherence, creating a sense of wonder and confusion.

The Concept of “Mu”

2. Definition of “Mu”:

  • Zen Buddhism: In Zen, “Mu” (無) translates to “no,” “not,” or “nothingness.” It is used to indicate the negation of a question that is based on faulty premises or binary logic.
  • Unanswerable Questions: The concept of “Mu” suggests that some questions do not have meaningful answers within the conventional framework of thinking and require transcending those limitations to achieve enlightenment.

Example: The famous koan “Does a dog have Buddha-nature?” answered with “Mu” indicates that the question itself is flawed and that true understanding lies beyond the yes/no dichotomy.

Linking Escher and “Mu”

3. Escher’s Art as Visual Koans:

  • Transcending Logic: Like Zen koans, Escher’s impossible drawings invite viewers to transcend logical thinking and experience the limits of rationality.
  • Mindfulness and Perception: Engaging with Escher’s work can be seen as a form of mindfulness practice, drawing attention to the nature of perception and the constructed nature of reality.

Philosophical and Artistic Implications

4. Challenging Perception:

  • Philosophical Inquiry: Both Escher’s art and the concept of “Mu” challenge the viewer to reconsider their understanding of reality and question the assumptions underlying their perceptions.
  • Art as Meditation: Viewing Escher’s work can be a meditative process, encouraging a deeper awareness of the mind’s role in constructing reality.

Sources:

Conclusion

Understanding M.C. Escher’s impossible drawings as analogous to the concept of “Mu” in Zen Buddhism provides a profound way to engage with his work. It frames these drawings as not just artistic curiosities but as tools for philosophical and meditative exploration, inviting viewers to experience the limits of logical thinking and the nature of perception.

Gödel’s Incompleteness Theorems

Relationship between the chinese ideogram MU and Gödel’s Incompleteness Theorems which demonstrate how statements within a mathematical system can refer to themselves, creating a loop of truth and unprovability.

The Concept of “Mu”

1. Definition:

  • “Mu” (無): In Zen Buddhism, “Mu” is often used to negate the premise of a question, implying that the question itself is flawed or that it transcends binary logic. It encourages thinking beyond conventional dualities and logical constraints.
  • Example: The Zen koan “Does a dog have Buddha-nature?” is answered with “Mu,” indicating that the question cannot be answered within the confines of conventional logic.

Sources:

2. Gödel’s Theorems:

  • First Theorem: In any consistent formal system that is capable of expressing arithmetic, there are true statements that cannot be proven within the system.
  • Second Theorem: No consistent system can prove its own consistency.
  • Self-Reference: Gödel’s work showed how statements within a mathematical system could refer to themselves, creating self-referential loops that lead to incompleteness and unprovability.

Sources:

Relationship Between “Mu” and Gödel’s Theorems

3. Transcending Conventional Logic:

  • Negation and Self-Reference: Both “Mu” and Gödel’s Theorems deal with the limits of conventional logic and binary thinking. “Mu” negates the premises of questions that cannot be answered within the framework of dualistic logic, while Gödel’s Theorems reveal the limitations within formal mathematical systems.
  • Example: Just as “Mu” responds to a question by indicating it transcends binary yes/no answers, Gödel’s Theorems demonstrate that within any sufficiently complex system, there are statements that elude true/false categorization within the system itself.

4. Paradox and Limitations:

  • Paradox: Both concepts embrace paradox as a fundamental aspect of understanding reality. “Mu” embraces the paradox of negating a question to reveal deeper truths, while Gödel’s Theorems show that systems of logic are inherently incomplete and cannot fully describe their own structure.
  • Limitations of Formal Systems: Gödel’s work aligns with the spirit of “Mu” by showing that some truths lie beyond formal provability, thus inviting a more holistic or transcendent approach to understanding.

5. Philosophical Implications:

  • Beyond Formalism: Both concepts encourage moving beyond strict formalism to grasp deeper truths. “Mu” encourages direct, experiential understanding, while Gödel’s Incompleteness Theorems suggest the necessity of meta-mathematical perspectives to understand the limits of formal systems.

Sources:

Conclusion

The Chinese ideogram “Mu” and Gödel’s Incompleteness Theorems both explore the limitations and paradoxes inherent in systems of logic and perception. “Mu” negates questions that are trapped within dualistic thinking, urging a transcendence of conventional logic, while Gödel’s Theorems highlight the inherent incompleteness of formal mathematical systems, suggesting that some truths lie beyond formal proof. Both concepts challenge us to rethink the nature of reality, truth, and understanding beyond the confines of traditional frameworks.

Johann Sebastian Bach’s Compositions

The interplay between Bach’s compositions, particularly his fugues and canons, and the concepts explored by Gödel and Escher can also be linked to the philosophical implications of the Chinese ideogram “Mu” (無). Here’s an elaboration on this relationship:

1. Recursive Structures in Music:

  • Fugues and Canons: Bach’s fugues and canons are prime examples of musical recursion and self-reference. In a fugue, a theme or subject is introduced and then developed in multiple, interweaving voices, creating a complex, recursive structure.
  • Fugues and Canons are not literally endless, but they exhibit recursive structures and techniques that can create the illusion of infinite continuation.
  • Example: In “The Art of Fugue,” Bach explores variations of a single theme in multiple ways, demonstrating how a simple motif can be transformed through recursive patterns.

Sources:

3. Visual Paradoxes and Recursion:

  • Impossible Structures: Escher’s drawings, like “Ascending and Descending” or “Relativity,” use visual paradoxes to challenge the viewer’s perception of space and logic, similar to how Bach’s music and Gödel’s theorems challenge auditory and mathematical perception.
  • Self-Reference: Escher’s works often include self-referential elements that loop back on themselves, creating endless cycles and paradoxes.

Sources:

The Chinese Ideogram “Mu” (無)

4. Concept of “Mu” in Zen Buddhism:

  • Negation and Transcendence: “Mu” negates the premises of questions that are limited by conventional logic, encouraging a transcendence of binary thinking. It signifies an answer that is beyond the dualistic “yes” or “no,” indicating a deeper, often ineffable reality.
  • Parallels in Bach’s Music: Just as “Mu” invites one to move beyond conventional answers, Bach’s recursive structures encourage listeners to experience music in a way that transcends straightforward narrative or linear progression. The music loops and interweaves, much like a Zen koan, prompting deeper reflection and insight.

Sources:

Integration of Concepts

5. Integrating Bach, Gödel, Escher, and “Mu”:

  • Transcending Limits: All these elements—Bach’s musical structures, Gödel’s mathematical theorems, Escher’s visual paradoxes, and the concept of “Mu”—invite a transcendence of traditional boundaries and encourage an exploration of the infinite and the paradoxical.
  • Exploration of the Infinite: Bach’s use of recursion and variation, Gödel’s demonstration of the inherent limitations of formal systems, Escher’s visual loops, and the philosophical negation of “Mu” all reflect a profound engagement with the infinite and the unprovable.

Sources:

Conclusion

The relationships between Bach’s recursive musical compositions, Gödel’s self-referential mathematical theorems, Escher’s visual paradoxes, and the concept of “Mu” in Zen Buddhism highlight a shared exploration of the limits of conventional understanding. Each in its own way challenges the observer or listener to transcend ordinary logic and perception, offering a richer, more complex appreciation of reality that embraces paradox and the infinite. This interweaving of ideas exemplifies how different disciplines can converge to deepen our understanding of the world and our place within it.

Reality

Reality is a complex and multi-dimensional concept that varies across different fields of inquiry. It can be understood through scientific investigation, philosophical reflection, religious experience, psychological construction, and artistic representation. Each perspective offers valuable insights, and together they contribute to a more comprehensive understanding of what reality entails.

I want to post here some considerations about reality as the target of our point of view and its implications.

Point of view, no matter what is said about objectivity, there is no way to escape it.

Subjectivity refers to the ways in which personal perspectives, feelings, beliefs, and desires influence an individual’s understanding and interpretation of the world. It contrasts with objectivity, which aims to present an unbiased and universal viewpoint.

Subjectivity encompasses personal perspectives, experiences, and biases that shape individual understanding and interpretation of the world. It is crucial for appreciating the diversity of human experience and for fostering empathy, ethical consideration, and critical thinking. Understanding subjectivity in different contexts provides a richer and more nuanced view of human cognition and culture.

I dare to say that it is impossible for a human being to deal with objectivity completely as it is supposed to be and the effects of that when dealing with reality is an aspect seldom discussed and not totally understood.

Objectivity refers to the viewpoint that aims to remove personal biases, emotions, and subjective interpretations, striving to present an unbiased and universally valid understanding of reality.

From a human perspective, other viewpoints besides subjectivity include objectivity, intersubjectivity, and perspectivism. Each offers a unique way of understanding and interpreting the world.

To my quest to the impossibility for a human being to deal with objectivity as it is supposed to be I should appeal to the concept of Perspectivism, which is the idea that knowledge and truth are always from a particular perspective and that no single perspective can claim absolute or objective truth.

The concession I make is Intersubjectivity, which refers to shared understanding and meaning that arise from interactions between individuals. It emphasizes the relational aspect of experience and knowledge.

It is the back bone of what is considered truthful and correct behind scientific investigation, philosophical reflection, religious experience, psychological construction, and artistic representation.

And it is more a product of ourselves as we are than the object which is in discussion.

An evidence of that is that objectivity, intersubjectivity, and perspectivism are versatile concepts that extend beyond scientific, philosophical, and psychological domains into fields such as legal studies, anthropology, political science, art, and communication, especially linguistics. Each field utilizes these concepts to explore and understand complex phenomena from multiple angles and perspectives.

How far off reality escapes our limits of perception?

Chat GPT gives us an excellent summary to bring that discussion to focus:

Sensory Limitations

1. Visual Spectrum:

  • Human Vision: Humans can perceive light wavelengths approximately between 380 and 750 nanometers, which constitutes the visible spectrum. However, a vast array of electromagnetic radiation exists outside this range, including ultraviolet, infrared, X-rays, and radio waves, which we cannot see without technological assistance.
  • Sources: National Center for Biotechnology Information – The Human Eye

2. Auditory Range:

  • Human Hearing: Humans can typically hear sounds in the frequency range of 20 Hz to 20 kHz. Many animals can hear frequencies outside this range (e.g., dogs can hear up to 45 kHz, and bats use echolocation in the range of 20 kHz to 200 kHz).
  • Sources: American Speech-Language-Hearing Association – Hearing Loss

3. Sensory Perception:

  • Touch, Smell, Taste: Our senses of touch, smell, and taste are also limited by the types of receptors we have and their sensitivity. Many chemicals and tactile sensations exist beyond our detection thresholds.
  • Sources: Harvard University – The Senses

Cognitive and Conceptual Limits

1. Cognitive Biases:

  • Bias and Perception: Human perception is influenced by cognitive biases that can distort our understanding of reality. These include confirmation bias, where we favor information that confirms our preconceptions, and inattentional blindness, where we fail to notice unexpected objects in our visual field.
  • Sources: American Psychological Association – Cognitive Biases

2. The Hard Problem of Consciousness:

  • Conscious Experience: The “hard problem” of consciousness refers to the difficulty of explaining why and how subjective experiences (qualia) arise from physical processes in the brain. This problem highlights a significant gap in our understanding of reality as experienced.
  • Sources: Stanford Encyclopedia of Philosophy – Consciousness

Scientific Theories and Technological Advances

1. Quantum Mechanics:

  • Quantum Reality: Quantum mechanics reveals a reality at the subatomic level that is fundamentally different from our everyday experiences. Phenomena such as wave-particle duality, superposition, and entanglement challenge our classical intuitions.
  • Sources: Nature – Quantum Mechanics

2. Relativity:

  • Spacetime and Gravity: Einstein’s theories of special and general relativity describe how time and space are interconnected and how gravity affects them. These theories have been confirmed by numerous experiments but present a view of reality that is non-intuitive compared to Newtonian physics.
  • Sources: NASA – Relativity

3. Technological Extensions:

  • Enhanced Perception: Technologies such as microscopes, telescopes, MRI scanners, and particle accelerators extend our perception far beyond natural limits, revealing details of the micro and macro universe that were previously inaccessible.
  • Sources: National Institutes of Health – Imaging Techniques

Philosophical Considerations

1. Kantian Perspective:

2. Constructivism:

  • Constructed Reality: Philosophers and cognitive scientists propose that much of what we perceive as reality is a mental construct, influenced by our sensory inputs, cognitive processes, and cultural background.
  • Sources: Encyclopedia of Philosophy – Constructivism

Conclusion

Reality extends far beyond the limits of human perception, constrained by our sensory and cognitive capacities. Scientific advancements and philosophical inquiries continue to push these boundaries, revealing a more complex and often counterintuitive picture of the universe. Understanding these limitations is crucial for expanding our knowledge and developing new ways to perceive and interact with the world.

For further exploration:

I criticized Chat GPT for the lack of the religious approach and it came up with that:

Religious Perspectives on Perception and Reality

1. Christianity

Mystical Experience:

  • Nature of God: In Christianity, God is often described as transcendent and beyond human understanding. Mystical experiences are sometimes seen as moments when individuals come closer to perceiving divine reality, but even then, the fullness of God remains beyond human comprehension.
  • Via Negativa (Apophatic Theology): This approach, also known as negative theology, posits that God can only be described by what He is not, emphasizing the limits of human language and perception in capturing the divine essence.
  • Sources: Mystical Theology, The Cloud of Unknowing

2. Islam

Tawhid and the Unseen:

  • Unity of God (Tawhid): Islam emphasizes the oneness and uniqueness of God, which is beyond human perception and comprehension. The concept of Tawhid asserts that God’s essence and actions are beyond human understanding.
  • The Unseen (Al-Ghaib): The Quran mentions “the Unseen,” referring to aspects of reality that are beyond human perception and knowledge, including the divine realm, angels, and the hereafter.
  • Sources: Islamic Philosophy – The Unseen, The Quran

3. Hinduism

Maya and Brahman:

  • Maya: Hinduism describes the concept of Maya, which is the illusionary nature of the material world. According to this view, the reality we perceive with our senses is not the ultimate reality.
  • Brahman: The ultimate reality in Hinduism is Brahman, an infinite, unchanging, and transcendent entity that is beyond human perception. The realization of Brahman is considered the highest knowledge and spiritual goal.
  • Sources: Upanishads, Advaita Vedanta

4. Buddhism

Emptiness (Śūnyatā) and Nirvana:

  • Śūnyatā: In Mahayana Buddhism, the concept of Śūnyatā, or emptiness, teaches that all phenomena are devoid of intrinsic existence. This understanding challenges conventional perceptions of reality and highlights the interconnectedness and impermanence of all things.
  • Nirvana: The ultimate goal in Buddhism, Nirvana, represents a state beyond suffering and individual existence. It is often described as the realization of the true nature of reality, which transcends ordinary perception and conceptualization.
  • Sources: The Heart Sutra, Theravada and Mahayana Perspectives

5. Jewish Mysticism (Kabbalah)

Ein Sof and the Sefirot:

  • Ein Sof: In Kabbalah, Ein Sof refers to the infinite and unknowable aspect of God. It is beyond human comprehension and represents the boundless nature of the divine.
  • Sefirot: The Sefirot are emanations through which Ein Sof interacts with the world. They provide a framework for understanding divine attributes and the process of creation, but the true essence of God remains beyond human perception.
  • Sources: Kabbalah and Jewish Mysticism, The Zohar

Conclusion

Religious perspectives offer profound insights into the nature of reality and the limits of human perception. They often emphasize the transcendence of the divine and the limitations of human understanding, suggesting that true reality is beyond what can be perceived through the senses or comprehended by the mind. These views complement scientific and philosophical approaches by providing a more holistic understanding of the human quest to grasp the nature of existence.

For further exploration:

I criticized Chat GPT even further asking wheter wouldn’t be any other approaches and it came up with:

Artistic, literary, psychological and cultural perspectives

There are additional approaches to understanding the problem of the limits of human perception and reality. These include artistic, literary, psychological, and cultural perspectives. Each offers unique insights into how we perceive and conceptualize the world.

Artistic Approaches

Artistic approaches here refer to paintings.
Painting has been a fundamental aspect of human culture and expression since the dawn of mankind. The evidence from prehistoric cave and rock art around the world demonstrates that early humans used painting to communicate, tell stories, and express themselves artistically about the reality they lived in.
Painting initially flourished with religious themes and established itself more broadly in society, especially after the Renaissance.
For a more detailed discussion, please take a look separately at Artistic Styles of Paintings.

For now we will privilege:

1. Visual Arts:

  • Surrealism: Surrealist artists like Salvador Dalí and René Magritte explore the boundaries of reality and perception by creating dream-like scenes that challenge our understanding of the world. Their work often blurs the line between reality and imagination, inviting viewers to question their own perceptions.
  • Source: Museum of Modern Art – Surrealism

2. Abstract Art:

  • Abstract Expressionism: Artists like Jackson Pollock and Mark Rothko use abstract forms to evoke emotions and ideas beyond the concrete, suggesting that reality includes not just what is seen but also what is felt.
  • Source: Tate – Abstract Expressionism

3.Realism

  • Realism, and particularly American Realism, focuses on the truthful, detailed representation of ordinary life and society. It emphasizes the everyday experiences of people and often includes a social or political commentary, reflecting the realities of the world without idealization. This movement has had a profound impact on the development of art, influencing many subsequent styles and continuing to resonate in contemporary art.
  • The name of the style suggest “reality” and I will analyse separately emphasizing the relationship of what they painted with reality two of the great artists which belong to this school and devoted their art to the american scene: Edward Hopper and Norman Rockwell

Literary Approaches

Point of view in literature

Styles are also known as genres and a list of them is:

Narrative: This style focuses on telling a story, often involving characters, a plot, and a setting. It can be found in novels, short stories, and epic poetry.

Descriptive: Descriptive writing aims to paint a picture with words, using detailed observations and sensory details to create vivid imagery. This style is often used in poetry and descriptive passages in prose.

Expository: Expository writing seeks to inform, explain, or describe a topic. It is clear, concise, and structured, commonly found in essays, articles, and textbooks.

Persuasive: Persuasive writing aims to convince the reader of a particular viewpoint or to take a specific action. This style uses arguments, evidence, and rhetorical devices, often found in speeches, essays, and opinion pieces.

Reflective: Reflective writing involves the writer’s personal thoughts, feelings, and reflections on a subject. It is often introspective and can be found in journals, memoirs, and personal essays.

Poetic: Poetic style emphasizes the aesthetic qualities of language, such as rhythm, meter, and imagery. This style is prevalent in poetry but can also appear in lyrical prose.

Satirical: Satirical writing uses humor, irony, and exaggeration to criticize or poke fun at individuals, institutions, or societal norms. This style is often found in essays, novels, and plays.

Stream of Consciousness: This style attempts to capture the flow of a character’s thoughts and feelings in a continuous, unstructured manner. It is often found in modernist literature.

Minimalist: Minimalist writing is characterized by its simplicity and brevity. It uses concise language and often leaves much to the reader’s interpretation. This style is commonly found in contemporary fiction and poetry.

Gothic: Gothic style features dark, mysterious, and supernatural elements, often exploring themes of horror and romance. This style is prevalent in 18th and 19th-century literature.

Realist: Realist writing aims to depict life accurately and truthfully, focusing on everyday experiences and characters. This style emerged in the 19th century and continues to influence modern literature.

Magical Realism: Magical realism blends realistic narrative with fantastical elements, presenting extraordinary events as part of everyday life. This style is prominent in Latin American literature

Although all of them depicts reality, the Realist and the Magical Realism are more contextualized in our discussion:

1.Realist

  • Perhaps the first realist author was Dante Alighieri with his Divine Comedy. James Joyce did a lot of experiments dealing with reality. Both were concernced with everyday life. I did a blog on both of them which can be see pressing above. I will quote myself and discuss specifically reality in both authors at: Dante, James Joyce and Reality

2. Magical Realism:

3. Science Fiction:

Psychological Approaches

1. Cognitive Science:

  • Perception and Reality: Cognitive science studies how the brain processes information and constructs reality. Research on optical illusions, for example, shows how our perception can be deceived, highlighting the difference between appearance and reality.
  • Source: American Psychological Association – Optical Illusions

2. Depth Psychology:

Cultural Approaches

1. Indigenous Knowledge Systems:

2. Eastern Philosophies:

  • Zen Buddhism: Zen practices emphasize direct experience and mindfulness, suggesting that true understanding comes from experiencing reality as it is, beyond conceptual thinking.
  • Source: Zen Mind, Beginner’s Mind by Shunryu Suzuki

Technological Approaches

1. Virtual Reality (VR):

  • Expanding Perception: VR technology creates immersive environments that extend human perception beyond natural limits, offering new ways to experience and understand different aspects of reality.
  • Source: Scientific American – Virtual Reality

2. Augmented Reality (AR):

Conclusion

Understanding the limits of human perception and the nature of reality can be approached from various perspectives beyond just scientific, philosophical, and religious views. Artistic, literary, psychological, cultural, and technological approaches all provide valuable insights into how we perceive and interpret the world, highlighting the multifaceted nature of reality and the diverse ways humans seek to understand it.

For further exploration:

Then, wrapping it up, I argued with Chat GPT that at the end of the day reality is not graspeable to humans and a work in progress kind of thing, to which it came up with the following:

Yes, the nature of reality can be considered elusive and a work in progress when it comes to human understanding. This perspective is supported by various disciplines, including science, philosophy, religion, and more. Here’s how different approaches contribute to this view:

Scientific Perspective

1. Uncertainty and Quantum Mechanics:

  • Heisenberg Uncertainty Principle: This principle states that certain pairs of physical properties, like position and momentum, cannot be simultaneously known to arbitrary precision. This implies fundamental limits to what can be known about the behavior of particles.
  • Quantum Mechanics: The nature of particles and waves, entanglement, and superposition all challenge classical notions of reality, suggesting that at a fundamental level, reality is probabilistic and not fully determinable.
  • Sources: Nature – Quantum Mechanics

2. Relativity:

  • Theory of Relativity: Einstein’s theories of special and general relativity show that measurements of time and space are relative to the observer’s state of motion, challenging the idea of an absolute, objective reality.
  • Sources: NASA – Relativity

Philosophical Perspective

1. Phenomenology:

  • Edmund Husserl and Martin Heidegger: These philosophers argue that reality is experienced subjectively, and our perceptions and experiences shape our understanding of it. The “phenomenon” is what appears in our consciousness, not necessarily the objective reality.
  • Sources: Stanford Encyclopedia of Philosophy – Phenomenology

2. Existentialism:

  • Jean-Paul Sartre and Albert Camus: Existentialist philosophers emphasize the individual’s experience and the subjective nature of reality. They argue that meaning and essence are not inherent in the world but are created by individuals.
  • Sources: Stanford Encyclopedia of Philosophy – Existentialism

Religious Perspective

1. Mystical Experience:

  • Mysticism: Various religious traditions suggest that ultimate reality is beyond human comprehension and can only be approached through mystical experiences, which transcend ordinary perception and understanding.
  • Sources: Mystical Theology

2. Transcendence:

  • Religious Views: Many religions, including Christianity, Islam, Hinduism, and Buddhism, propose that true reality or the divine is transcendent and not fully accessible to human reason and perception.
  • Sources: The Upanishads, The Quran

Cultural and Artistic Perspective

1. Surrealism and Abstract Art:

  • Art Movements: Movements like surrealism and abstract art explore the boundaries of reality and imagination, suggesting that what we perceive as reality is just one layer of a deeper, more complex existence.
  • Sources: MoMA – Surrealism

2. Literature and Science Fiction:

Conclusion

The consensus across these diverse fields is that reality is complex, multifaceted, and often beyond the full grasp of human perception. Scientific discoveries continuously challenge our understanding, philosophical inquiry exposes the limits of our knowledge, religious mysticism points to the ineffable, and cultural and artistic expressions highlight the subjective nature of experience. Reality, in this sense, is indeed a work in progress, constantly being redefined and understood in new ways.

For further reading:

Last, but not least, prompted by the interactions I have had with my friend Dr. Gary Stillwell, who very aptly noted that the feeling this whole line of argument gave him was that it was a kind of via-negativa thinking, which I add as a framework so we can return to our discussion about MU and the Japanese filmmaker Ozu Yasujiro.

To me this whole discussion has a lot to do with integrating oposites and solve paradoxes, which it seems to me that via negativa is a good way to face it and I asked Chat GPT if via negativa is not a way to integrate oposites and solve paradoxes and it came up with the following:

Via Negativa, also known as apophatic theology, is a way of describing what something is by stating what it is not. While it isn’t specifically designed to integrate opposites or solve paradoxes, it can help in addressing some philosophical and theological paradoxes by emphasizing the limitations of human language and understanding in describing complex or divine realities. Here’s how Via Negativa works and its implications for dealing with opposites and paradoxes:

Understanding Via Negativa

1. Definition:

2. Purpose:

Integrating Opposites

1. Beyond Binary Thinking:

  • Transcendence of Opposites: Via Negativa can help integrate opposites by moving beyond binary thinking. By denying both positive and negative attributes, it encourages a view that transcends simple dichotomies.
  • Example: Instead of saying God is good (positive) or not evil (negative), Via Negativa would focus on the ineffable nature of God, which transcends human moral categories.

2. Addressing Paradoxes:

  • Embracing Mystery: By accepting that some aspects of reality are beyond human comprehension, Via Negativa can help address paradoxes by acknowledging that human reason has its limits. This approach can diffuse the tension inherent in paradoxical statements by suggesting that the truth lies beyond what can be articulated.
  • Example: The concept of the Trinity in Christianity (one God in three persons) can be approached through Via Negativa by emphasizing what the Trinity is not, rather than trying to explain it in positive, human terms.

Solving Paradoxes

1. Mystical Approach:

  • Mysticism: Mystical traditions often use Via Negativa to solve paradoxes by suggesting that true understanding comes through direct, often non-verbal experience rather than logical analysis.
  • Example: In Zen Buddhism, koans (paradoxical statements or questions) are used to transcend ordinary logical thinking and reach a direct, experiential insight.

2. Philosophical Implications:

  • Transcendence of Language: In philosophy, Via Negativa can suggest that paradoxes arise from the limitations of language and that a deeper truth may be understood by transcending these limitations.
  • Example: Wittgenstein’s later philosophy often implies that philosophical problems arise from misunderstandings of language, and the solution involves recognizing the limits of what can be said.

Conclusion

Via Negativa is a valuable approach in theology and philosophy for addressing the limitations of human language and understanding. While it may not directly integrate opposites or solve paradoxes, it helps by transcending binary thinking and acknowledging the mysteries that lie beyond human comprehension. This approach can provide a framework for understanding complex or paradoxical concepts by focusing on what cannot be said, thereby opening the door to deeper, non-verbal insights.

For further exploration:

Principles of Via Negativa Applied to Reality

  1. Reality is Not Fully Comprehensible:
  2. Reality is Not Static:
    • Dynamic and Changing: Reality is not a fixed or static entity. It is constantly in flux, evolving, and changing. This negates any notion of reality as an unchanging, eternal state.
    • SourcesHeraclitus on Change
  3. Reality is Not Subject to Dualities:
    • Beyond Dualism: Reality is not confined to simple dualities such as good/evil, true/false, or subject/object. These binary distinctions do not capture the complexity and interconnectedness of reality.
    • SourcesBuddhist Philosophy on Non-Duality
  4. Reality is Not Merely Material:
    • Transcends Materialism: Reality is not limited to the material or physical world. It encompasses more than what can be perceived through the senses or measured by science.
    • SourcesThe Mind-Body Problem
  5. Reality is Not Fully Expressible:
    • Inexpressibility: Reality cannot be fully captured or expressed through language. Words and symbols are inadequate to convey the entirety of what reality is.
    • SourcesWittgenstein on the Limits of Language

Examples and Interpretations

  1. Mystical Traditions:
    • In mystical traditions, such as Zen Buddhism and certain strands of Christian mysticism, reality is often approached through silence, meditation, and direct experience rather than conceptualization. The emphasis is on experiencing reality directly rather than defining it.
    • SourcesMysticism in Comparative Religion
  2. Philosophical Skepticism:
    • Philosophical skepticism suggests that we cannot have absolute knowledge of reality. Instead, we should focus on what we can doubt and what lies beyond our understanding.
    • SourcesSkepticism in Philosophy
  3. Quantum Mechanics:
    • Quantum mechanics demonstrates that at a fundamental level, reality behaves in ways that defy classical intuition. Particles can exist in superpositions, and their properties are not determined until measured, suggesting that reality is not what classical physics describes.
    • SourcesQuantum Mechanics Overview

Conclusion

Via Negativa provides a method for approaching the concept of reality by focusing on what it is not. This approach acknowledges the limitations of human understanding, language, and perception. By stripping away inadequate and misleading descriptions, Via Negativa can lead to a more profound and humble appreciation of the complexity and mystery of reality.

For further exploration:


So much for reality… Let´s go back to to our discussion about MU and the Japanese filmmaker Ozu Yasujiro.

Emergence, Dasein, To be or not to be and Material Constitution

The title of this post encompasses four takes on one aspect of “being” that to me are related, and the purpose of this post is that hopefully will help to understand what is at stake.

It is very important to realize that all these takes are points of view. The aim of point of view varies with each context, but generally, it is about providing a specific perspective from which a story, argument, or observation is made or understood. They all collide head on with reality, which I post separately

What sparked my idea was the discussion about why computers do not think and the discussion was under “What is consciousness“, specially “the hard problem”.

Perhaps through a rather long kind of introduction, examining the two most spread approaches on “being”, i.e. scholasticism and humanism, which will be detailed and the kind of shake down Heidegger did to them with his approach, will work as a frame to understand how emergence, shakespeare and material constitution has to do with it.

The discussion of “being”, in that post, (“What is consciousness“, specially “the hard problem”) is done from the point of view of our brain or what makes it possible to happen physically and here I want to add how this is discussed and considered from the point of view of, how do I say it, psychologically, or rather intellectually, under several schools of thought. I will privilege it philosophically or under the most commonly accepted philosophers who dedicated themselves to that.

Heidegger will be the philosophical reference and Encyclopaedia Britannica tells us that his groundbreaking work in ontology (the philosophical study of being, or existence) and metaphysics determined the course of 20th-century philosophy on the European continent and exerted an enormous influence on virtually every other humanistic discipline, including literary criticismhermeneuticspsychology, and theology.

Heidegger’s philosophy presents a significant shift from previous philosophical traditions. He critiques and reinterprets the ideas of Descartes, Kant, Hegel, Nietzsche, Husserl, and Aristotle, among others, to develop a new understanding of being. Heidegger’s focus on Dasein as “being-in-the-world,” his critique of traditional metaphysics, and his emphasis on existential and temporal aspects of human life represent a radical departure from classical and modern philosophical frameworks.

Heidegger’s Concept of Dasein

Dasein, a key concept in Martin Heidegger’s philosophy, is central to his magnum opus, “Being and Time” (Sein und Zeit). Heidegger uses Dasein to refer to the unique mode of being that characterizes human existence. Here’s a breakdown of what Heidegger meant by Dasein:

Key Aspects of Dasein

  1. Being-there:
    • The term Dasein is a German word that translates roughly to “being-there” or “existence.” Heidegger chose this term to emphasize that human beings are not just present in the world as objects among other objects but have a unique way of being that involves awareness and engagement with their surroundings.
    • Dasein is distinguished by its capacity to reflect on its own existence and the nature of being itself.
    • Sources: Stanford Encyclopedia of Philosophy – Heidegger, Internet Encyclopedia of Philosophy – Heidegger
  2. Existential Structure:
    • Dasein is not a static entity but is characterized by its potentialities and possibilities. It is always in a state of “being-ahead-of-itself,” constantly projecting itself into the future and shaping its own existence through choices and actions.
    • This notion contrasts with traditional metaphysical views that see existence as a static state or predefined essence.
    • Sources: Encyclopaedia Britannica – Dasein, Heidegger’s “Being and Time”
  3. Being-in-the-world:
  4. Authenticity and Inauthenticity:
    • Heidegger explores how Dasein can exist authentically or inauthentically. Authenticity involves recognizing and embracing one’s own unique potential and living in accordance with one’s true self.
    • In contrast, inauthenticity involves conforming to the expectations and norms of others, losing one’s individuality in the process.
    • This dichotomy highlights the importance of personal responsibility and the pursuit of a genuine and meaningful existence.
    • Sources: Stanford Encyclopedia of Philosophy – Authenticity, Routledge Encyclopedia of Philosophy – Heidegger
  5. Being-toward-death:
    • Heidegger argues that awareness of death is a fundamental aspect of Dasein. Recognizing the inevitability of death helps Dasein understand the finite nature of existence and motivates authentic living.
    • This concept of “being-toward-death” (Sein-zum-Tode) encourages individuals to confront their mortality and live in a way that reflects their true values and aspirations.
    • Sources: Heidegger’s “Being and Time”, Internet Encyclopedia of Philosophy – Being-toward-Death

Summary

Heidegger’s concept of Dasein represents a fundamental shift in thinking about human existence. It emphasizes the uniqueness of human beings as entities that are inherently aware of and capable of reflecting on their own existence. Dasein’s nature is characterized by its possibilities, its embeddedness in the world, and its constant engagement with the question of what it means to exist authentically. This concept has had a profound impact on existential philosophy and continues to influence contemporary thought on human existence.

Key Philosophers Heidegger Engages With

Martin Heidegger’s philosophy, particularly as presented in “Being and Time,” critiques and diverges from the ideas of several key philosophers, proposing a new way of thinking about existence, being, and human nature. Here’s an analysis of the philosophers whose ideas Heidegger challenges or seeks to replace:

  1. René Descartes:
    • Dualism and Subjectivity: Descartes is known for his dualistic approach, separating mind and body and emphasizing the cogito (“I think, therefore I am”) as the foundation of knowledge. Heidegger challenges this separation, arguing that being cannot be understood merely as a thinking subject separate from the world. Instead, he proposes the concept of Dasein as “being-in-the-world,” where existence is characterized by its interactions and relationships with the surrounding environment​ .
    • Objectification of Being: Descartes’ view treats being as an object of scientific study, something that can be dissected and understood through rational thought. Heidegger opposes this, suggesting that such an approach overlooks the fundamental question of what it means to be​
  2. Immanuel Kant:
    • Epistemology and Transcendental Idealism: Kant’s philosophy focuses on how we can know things and the structures that underlie our perception and understanding of the world. Heidegger critiques Kant for reducing being to the structures of human cognition, thereby neglecting the deeper, more fundamental aspects of existence . Heidegger’s ontological focus attempts to go beyond Kantian epistemology to explore the nature of being itself.
    • Time and Temporality: Kant treats time as a mere condition for human experience. Heidegger, on the other hand, emphasizes the existential significance of time, proposing that understanding our own temporality is crucial for grasping the essence of being .
  3. G.W.F. Hegel:
    • Absolute Idealism: Hegel’s philosophy presents a dialectical process where reality is seen as a development towards an absolute, rational self-consciousness. Heidegger critiques Hegel’s abstraction and his concept of a totalizing Absolute, arguing that it overlooks the concrete, everyday experience of being . Heidegger focuses on individual existence and the lived experience rather than a grand historical process.
    • Historical Determinism: While Hegel emphasizes the unfolding of spirit through historical processes, Heidegger rejects the notion that history progresses towards a specific end. For Heidegger,history is not a deterministic path but a series of open-ended possibilities for Dasein.
  4. Friedrich Nietzsche:
    • Nihilism and the Will to Power: Nietzsche’s critique of traditional metaphysics and his concept of the will to power significantly influence Heidegger. However, Heidegger believes Nietzsche’s approach ultimately falls into the same metaphysical trap by replacing a transcendent being with a focus on power dynamics. Heidegger seeks to move beyond Nietzsche’s nihilism by rethinking the question of being itself, without reducing it to human will or power .
    • Overcoming Metaphysics: Heidegger shares Nietzsche’s desire to overcome traditional metaphysics, but he does so by reinterpreting the meaning of being rather than abandoning the concept of being entirely as Nietzsche suggests .
  5. Edmund Husserl:
    • Phenomenology and Intentionality: As the founder of phenomenology, Husserl emphasizes the intentional structure of consciousness and its role in constituting meaning. Heidegger diverges from Husserl by arguing that phenomenology should focus not just on consciousness but on the structures of being itself. He develops hermeneutic phenomenology, which interprets the meaning of being in the context of human existence rather than purely in terms of consciousness and intentionality .
    • Reductionism: Husserl’s method involves bracketing or suspending the natural attitude to focus purely on consciousness. Heidegger argues that this approach is too abstract and fails to account for the existential realities of human life. Heidegger’s approach seeks to uncover the pre-theoretical conditions of being .
  6. Aristotle:
    • Being as Presence: Aristotle’s metaphysics views being primarily in terms of substance and presence. Heidegger respects Aristotle but critiques his focus on being as something that is present-at-hand, arguing instead for a more dynamic understanding of being that encompasses potentiality and temporality . Heidegger seeks to revive a pre-Socratic sense of being that is not confined to static categories.
    • Ontological Difference: Heidegger develops the concept of the ontological difference, distinguishing between being (Sein) and beings (Seiende), which he believes Aristotle did not fully articulate .

Conclusion

Heidegger’s philosophy presents a significant shift from previous philosophical traditions. He critiques and reinterprets the ideas of Descartes, Kant, Hegel, Nietzsche, Husserl, and Aristotle, among others, to develop a new understanding of being. Heidegger’s focus on Dasein as “being-in-the-world,” his critique of traditional metaphysics, and his emphasis on existential and temporal aspects of human life represent a radical departure from classical and modern philosophical frameworks.

Heidegger’s Influence on Existentialism

Martin Heidegger is widely recognized as a key precursor to existentialism, although he himself did not align strictly with the existentialist label. His philosophical ideas, especially as articulated in “Being and Time” (Sein und Zeit), had a profound influence on the existentialist movement and its central themes. Here’s how Heidegger’s work laid the groundwork for existentialism:

Core Contributions to Existentialism

  1. Focus on Existence and Being:
    • Existence Precedes Essence: Heidegger’s exploration of Dasein, or “being-there,” emphasizes the primacy of existence over essence, a theme that became central to existentialism. Existentialists argue that individuals must create their own meaning and essence through their actions and choices.
    • Heidegger’s view that human beings are defined not by a predetermined essence but by their potential to define themselves through choices and actions resonates with existentialist themes.
    • Sources: Stanford Encyclopedia of Philosophy – Existentialism, Internet Encyclopedia of Philosophy – Existentialism
  2. Authenticity and Inauthenticity:
    • Heidegger’s distinction between authentic and inauthentic existence influenced existentialists like Jean-Paul Sartre and Albert Camus. Authenticity involves embracing one’s freedom and potential, while inauthenticity involves conforming to societal norms and expectations.
    • This concept emphasizes the importance of individual responsibility and the need to live a life that is true to oneself, free from external impositions.
    • Sources: Routledge Encyclopedia of Philosophy – Authenticity, Encyclopaedia Britannica – Heidegger
  3. Being-in-the-World:
    • Heidegger’s notion of Being-in-the-world (In-der-Welt-sein) emphasizes that human existence is fundamentally relational and embedded in a context of interactions with others and the environment. This idea challenges the Cartesian separation of mind and body and underscores the interconnectedness of individual and world, a theme explored deeply in existentialist philosophy.
    • Existentialists, especially Sartre, expand on this idea to explore how individuals define themselves through their interactions with the world and others.
    • Sources: Stanford Encyclopedia of Philosophy – Heidegger’s Works, Cambridge University Press – Being-in-the-World
  4. Being-toward-Death:
    • Heidegger’s concept of Being-toward-death (Sein-zum-Tode) asserts that awareness of mortality is crucial for authentic existence. This notion influenced existentialist themes of finitude, freedom, and the urgency of living a meaningful life in the face of inevitable death.
    • Existentialists like Heidegger argue that confronting mortality leads to a deeper understanding of life and a more genuine approach to existence.
    • Sources: Heidegger’s “Being and Time”, Internet Encyclopedia of Philosophy – Being-toward-Death

Influence on Key Existentialist Thinkers

  1. Jean-Paul Sartre:
    • Sartre’s existentialism, particularly in works like “Being and Nothingness” (L’être et le néant), draws heavily on Heidegger’s ideas. Sartre’s concept of “being-for-itself” and the emphasis on human freedom and responsibility are directly influenced by Heidegger’s Dasein and authenticity.
    • Sartre expands on Heidegger’s ideas by focusing on the radical freedom of individuals to define their own existence and the burden of responsibility that comes with this freedom.
    • Sources: Stanford Encyclopedia of Philosophy – Sartre, Internet Encyclopedia of Philosophy – Sartre
  2. Simone de Beauvoir:
    De Beauvoir’s work, including “The Second Sex” (Le Deuxième Sexe), reflects Heidegger’s influence, particularly in her exploration of the lived experience and the dynamics of freedom and oppression.
    She applies existentialist concepts to issues of gender and identity, examining how societal structures influence individual existence and freedom.
    Sources: Encyclopedia Britannica – Simone de Beauvoir, Stanford Encyclopedia of Philosophy – Beauvoir
  3. Albert Camus:
    Although Camus rejected the existentialist label, his work is often associated with existentialism. His focus on the absurd and the quest for meaning in a seemingly indifferent universe parallels Heidegger’s themes of existential anxiety and the search for authentic being.
    Camus’s concept of the “absurd hero” reflects a Heideggerian engagement with the existential conditions of human life.
    Sources: Stanford Encyclopedia of Philosophy – Camus, Internet Encyclopedia of Philosophy – Camus
    Heidegger’s Distinction from Existentialism
  4. Ontology vs. Existentialism:
    While existentialism focuses on individual existence and personal freedom, Heidegger’s work is more concerned with ontology, the study of being itself. He sought to uncover the fundamental structures of existence that underlie individual experiences.
    Heidegger distanced himself from existentialism, particularly from the more humanistic and individualistic interpretations of thinkers like Sartre.
    Sources: Encyclopaedia Britannica – Existentialism, Cambridge University Press – Heidegger and Existentialism
  5. Critique of Humanism:
    Heidegger criticized the humanism that underlies much of existentialist thought, arguing that it remains trapped in a metaphysical framework that fails to adequately address the question of being.
    He proposed a return to the pre-Socratic understanding of being that transcends human-centered perspectives.
    Sources: Stanford Encyclopedia of Philosophy – Heidegger and Humanism, Heidegger’s “Letter on Humanism”

Heidegger’s Distinction from Existentialism

  1. Ontology vs. Existentialism:
    While existentialism focuses on individual existence and personal freedom, Heidegger’s work is more concerned with ontology, the study of being itself. He sought to uncover the fundamental structures of existence that underlie individual experiences.
    Heidegger distanced himself from existentialism, particularly from the more humanistic and individualistic interpretations of thinkers like Sartre.
    Sources: Encyclopaedia Britannica – Existentialism, Cambridge University Press – Heidegger and Existentialism
  2. Critique of Humanism:
    Heidegger criticized the humanism that underlies much of existentialist thought, arguing that it remains trapped in a metaphysical framework that fails to adequately address the question of being.
    He proposed a return to the pre-Socratic understanding of being that transcends human-centered perspectives.
    Sources: Stanford Encyclopedia of Philosophy – Heidegger and Humanism, Heidegger’s “Letter on Humanism”

Conclusion

Heidegger’s ideas, particularly his concepts of Dasein, authenticity, and being-in-the-world, significantly influenced existentialist thought. His philosophical explorations of being and existence provided a foundational framework that existentialist thinkers expanded upon to explore themes of freedom, individuality, and the search for meaning in a complex and often indifferent world. While Heidegger himself did not identify with existentialism, his work remains a crucial precursor and influence on the movement.

Scholasticism and Humanism

Scholasticism and Humanism have played pivotal roles in shaping Western intellectual history. Scholasticism’s methodical approach to integrating faith and reason contrasts with Humanism’s celebration of human potential and classical learning. Understanding these movements helps illuminate the evolution of thought from the Middle Ages through the Renaissance and beyond.

Heidegger’s philosophy represents a “third way” by diverging from both scholasticism and humanism and introducing a new framework for understanding existence. His focus on existential phenomenology and the ontological question of Being provides a unique perspective that challenges the established traditions of his time.

Timeline of Scholasticism and Humanism

Both Scholasticism and Humanism represent critical intellectual movements in Western history, each associated with significant philosophical, theological, and cultural developments. Here’s a timeline detailing the key periods and events for each:

Scholasticism

1. Early Scholasticism (9th – 12th Century):

  • 9th Century: The Carolingian Renaissance saw the first inklings of Scholastic thought, as scholars such as John Scotus Eriugena began to integrate classical philosophy with Christian theology.
  • 11th Century: The establishment of medieval universities (e.g., University of Bologna) provided institutional support for Scholastic thought. Key figures like Anselm of Canterbury developed arguments for God’s existence, integrating reason with faith.

2. High Scholasticism (12th – 14th Century):

  • 12th Century: The works of Aristotle were reintroduced to Western Europe through translations from Arabic and Greek. Peter Abelard‘s use of dialectical reasoning laid the groundwork for later Scholastic methods.
  • 13th Century: The peak of Scholasticism with Thomas Aquinas, who synthesized Aristotelian philosophy with Christian doctrine in his “Summa Theologica” (c. 1265-1274). Aquinas’ work became a cornerstone of Scholastic thought.

3. Late Scholasticism (14th – 16th Century):

4. Decline and Influence (16th Century – Present):

  • 16th Century: The Protestant Reformation and the rise of Humanism challenged the dominance of Scholastic thought. However, it continued to influence Catholic education and theology, especially in institutions like the Jesuit colleges.
  • 20th Century: Neo-Scholasticism emerged, especially within Catholic intellectual circles, as a revival and modernization of Scholastic principles to address contemporary issues.

Humanism

1. Proto-Humanism and Early Developments (14th Century):

2. Italian Renaissance Humanism (15th Century):

3. Northern Renaissance and Reformation Humanism (16th Century):

4. Decline and Transformation (17th Century – Present):

  • 17th Century: The rise of the scientific revolution shifted intellectual focus away from classical humanism towards empirical science and rationalism.
  • 19th-20th Century: Humanism evolved into various forms, including secular humanism, which emphasizes reason, ethics, and justice while rejecting supernatural and religious beliefs as the basis for moral decision-making.

Key Differences in Their Timelines

  • Origins and Peak: Scholasticism originates in the early medieval period (9th century) and peaks in the 13th century with Thomas Aquinas. Humanism, however, emerges in the late medieval period (14th century) and peaks during the Renaissance (15th-16th centuries).
  • Decline and Legacy: Scholasticism declines with the advent of the Renaissance and the Reformation, while Humanism transitions into new forms such as the Enlightenment and secular humanism.

Conclusion

Scholasticism and Humanism mark two significant epochs in Western intellectual history. Scholasticism’s rigorous dialectical method sought to reconcile faith and reason during the medieval period. In contrast, Humanism’s focus on classical antiquity and human potential reshaped intellectual life during the Renaissance and beyond. Both movements have left a lasting impact on philosophy, education, and culture.

Key Philosophers Heidegger Engages With

Martin Heidegger’s philosophy, particularly as presented in “Being and Time,” critiques and diverges from the ideas of several key philosophers, proposing a new way of thinking about existence, being, and human nature. Here’s an analysis of the philosophers whose ideas Heidegger challenges or seeks to replace:

  1. René Descartes:
    • Dualism and Subjectivity: Descartes is known for his dualistic approach, separating mind and body and emphasizing the cogito (“I think, therefore I am”) as the foundation of knowledge. Heidegger challenges this separation, arguing that being cannot be understood merely as a thinking subject separate from the world. Instead, he proposes the concept of Dasein as “being-in-the-world,” where existence is characterized by its interactions and relationships with the surrounding environment​​ .
    • Objectification of Being: Descartes’ view treats being as an object of scientific study, something that can be dissected and understood through rational thought. Heidegger opposes this, suggesting that such an approach overlooks the fundamental question of what it means to be​.
  2. Immanuel Kant:
    • Epistemology and Transcendental Idealism: Kant’s philosophy focuses on how we can know things and the structures that underlie our perception and understanding of the world. Heidegger critiques Kant for reducing being to the structures of human cognition, thereby neglecting the deeper, more fundamental aspects of existence . Heidegger’s ontological focus attempts to go beyond Kantian epistemology to explore the nature of being itself.
    • Time and Temporality: Kant treats time as a mere condition for human experience. Heidegger, on the other hand, emphasizes the existential significance of time, proposing that understanding our own temporality is crucial for grasping the essence of being .
  3. G.W.F. Hegel:
    • Absolute Idealism: Hegel’s philosophy presents a dialectical process where reality is seen as a development towards an absolute, rational self-consciousness. Heidegger critiques Hegel’s abstraction and his concept of a totalizing Absolute, arguing that it overlooks the concrete, everyday experience of being . Heidegger focuses on individual existence and the lived experience rather than a grand historical process.
    • Historical Determinism: While Hegel emphasizes the unfolding of spirit through historical processes, Heidegger rejects the notion that history progresses towards a specific end. For Heidegger, history is not a deterministic path but a series of open-ended possibilities for Dasein .
  4. Friedrich Nietzsche:
    • Nihilism and the Will to Power: Nietzsche’s critique of traditional metaphysics and his concept of the will to power significantly influence Heidegger. However, Heidegger believes Nietzsche’s approach ultimately falls into the same metaphysical trap by replacing a transcendent being with a focus on power dynamics. Heidegger seeks to move beyond Nietzsche’s nihilism by rethinking the question of being itself, without reducing it to human will or power .
    • Overcoming Metaphysics: Heidegger shares Nietzsche’s desire to overcome traditional metaphysics, but he does so by reinterpreting the meaning of being rather than abandoning the concept of being entirely as Nietzsche suggests .
  5. Edmund Husserl:
    • Phenomenology and Intentionality: As the founder of phenomenology, Husserl emphasizes the intentional structure of consciousness and its role in constituting meaning. Heidegger diverges from Husserl by arguing that phenomenology should focus not just on consciousness but on the structures of being itself. He develops hermeneutic phenomenology, which interprets the meaning of being in the context of human existence rather than purely in terms of consciousness and intentionality .
    • Reductionism: Husserl’s method involves bracketing or suspending the natural attitude to focus purely on consciousness. Heidegger argues that this approach is too abstract and fails to account for the existential realities of human life. Heidegger’s approach seeks to uncover the pre-theoretical conditions of being .
  6. Aristotle:
    • Being as Presence: Aristotle’s metaphysics views being primarily in terms of substance and presence. Heidegger respects Aristotle but critiques his focus on being as something that is present-at-hand, arguing instead for a more dynamic understanding of being that encompasses potentiality and temporality . Heidegger seeks to revive a pre-Socratic sense of being that is not confined to static categories.
    • Ontological Difference: Heidegger develops the concept of the ontological difference, distinguishing between being (Sein) and beings (Seiende), which he believes Aristotle did not fully articulate .

Conclusion

Heidegger’s philosophy presents a significant shift from previous philosophical traditions. He critiques and reinterprets the ideas of Descartes, Kant, Hegel, Nietzsche, Husserl, and Aristotle, among others, to develop a new understanding of being. Heidegger’s focus on Dasein as “being-in-the-world,” his critique of traditional metaphysics, and his emphasis on existential and temporal aspects of human life represent a radical departure from classical and modern philosophical frameworks.

How to contextualize “The hard problem” in all that

Heidegger’s Ideas and Nagel’s Critique: A Philosophical Comparison

Thomas Nagel’s essay “What is it like to be a bat?” and its question about “The hard problem” raises important questions about subjective experience and the limits of objective knowledge. This critique can be applied to many philosophical approaches, including those of Heidegger and the philosophers he critiqued. Here’s an exploration of how Nagel’s ideas relate to Heidegger’s existential analysis and the broader philosophical landscape.

Nagel’s Critique of Subjective Experience

  1. Nagel’s Argument:
    • In “What is it like to be a bat?” Nagel argues that subjective experiences, or what he calls “qualia,” are inherently inaccessible to objective scientific analysis. He suggests that no matter how much we understand the physical aspects of a bat’s existence, we cannot grasp what it is like to be a bat from a first-person perspective.
    • This critique highlights the limitations of objective, third-person perspectives in capturing the full nature of subjective experience.
    • Sources: Nagel’s Essay on NYU
  2. Implications for Philosophy:
    • Nagel’s argument challenges reductionist approaches in philosophy and science that attempt to explain consciousness purely in terms of physical processes. He argues for the necessity of recognizing subjective experience as an essential part of reality that cannot be fully captured by objective descriptions.
    • This critique is particularly relevant to materialist and physicalist philosophies that seek to reduce all phenomena to physical explanations.
    • Sources: Internet Encyclopedia of Philosophy – Nagel, The Guardian – Thomas Nagel on Consciousness

Heidegger’s Philosophical Approach

  1. Heidegger’s Focus on Being:
    • Heidegger’s existential analysis in “Being and Time” (Sein und Zeit) focuses on the question of being and the unique nature of human existence (Dasein). Heidegger argues that traditional metaphysics and scientific approaches overlook the fundamental question of what it means to be.
    • Heidegger’s emphasis on Dasein and being-in-the-world underscores the importance of subjective experience and the lived reality of individuals.
    • Sources: Stanford Encyclopedia of Philosophy – Heidegger, Internet Encyclopedia of Philosophy – Heidegger
  2. Existential Authenticity:
    • Heidegger’s notion of authenticity involves recognizing one’s own potential and living in a way that is true to oneself, rather than conforming to external pressures or societal norms. This emphasis on personal experience and self-awareness aligns with Nagel’s focus on the subjective aspect of existence.
    • However, Heidegger’s approach is more concerned with the ontological conditions of existence rather than the specific qualitative experiences that Nagel discusses.
    • Sources: Encyclopaedia Britannica – Heidegger, Routledge Encyclopedia of Philosophy – Authenticity

Comparison with Philosophers Criticized by Heidegger

  1. Descartes and Kant:
    • Descartes: Heidegger criticized Descartes’ dualism for separating mind and body, leading to a view of being as a mere object among objects. Nagel’s critique also points to the limitations of understanding consciousness through purely objective frameworks, aligning with Heidegger’s emphasis on subjective experience.
    • Kant: Heidegger critiqued Kant for reducing being to cognitive structures, overlooking the existential and temporal dimensions of human existence. Nagel’s argument further challenges this reductionism by highlighting the essential nature of subjective experience that cannot be captured by cognitive or physical descriptions alone.
    • Sources: Stanford Encyclopedia of Philosophy – Descartes, Stanford Encyclopedia of Philosophy – Kant
  2. Hegel and Husserl:
    • Hegel: Heidegger critiqued Hegel for focusing on abstract, historical processes rather than concrete, lived experiences. Nagel’s emphasis on the irreducibility of subjective experience echoes Heidegger’s critique by underscoring the limitations of objective, historical narratives in capturing individual consciousness.
    • Husserl: While Heidegger builds on Husserl’s phenomenology, he departs from Husserl’s focus on intentional consciousness by emphasizing the pre-theoretical, existential aspects of being. Nagel’s critique can be seen as a further development of the phenomenological focus on lived experience, highlighting the limitations of purely intentional or cognitive approaches.
    • Sources: Internet Encyclopedia of Philosophy – Hegel, Stanford Encyclopedia of Philosophy – Husserl

Falling Short of Nagel’s Challenge

  1. Inaccessibility of Subjective Experience:
    • Both Heidegger and the philosophers he critiques may fall short of Nagel’s challenge by not fully addressing the problem of subjective experience. While Heidegger emphasizes the existential dimensions of being, he does not explicitly tackle the qualitative aspects of individual consciousness that Nagel highlights.
    • This suggests that any philosophical framework that attempts to understand human existence must account for the irreducible nature of subjective experience.
    • Sources: Thomas Nagel, Nagel’s Essay on NYU
  2. Limits of Objective Knowledge:
    • Heidegger’s critique of metaphysics and focus on existential ontology does address some of the limitations of objective knowledge. However, Nagel’s argument emphasizes that objective approaches cannot fully capture the subjective aspects of consciousness, a challenge that Heidegger’s framework does not fully resolve.
    • This highlights the ongoing tension between objective and subjective approaches in philosophy.
    • Sources: Internet Encyclopedia of Philosophy – Existentialism, The Guardian – Thomas Nagel on Consciousness

Conclusion

Thomas Nagel’s critique of subjective experience in “What is it like to be a bat?” presents a significant challenge to philosophical approaches that rely on objective or cognitive frameworks to understand consciousness. While Heidegger’s existential analysis and his critiques of other philosophers address some aspects of human existence, they may fall short of fully accounting for the qualitative, subjective nature of experience that Nagel emphasizes. This underscores the need for a comprehensive philosophical approach that integrates both objective and subjective dimensions of human life.

Modern philosophers and Thomas Nagel proposition

Thomas Nagel’s proposition in “What Is It Like to Be a Bat?” has sparked extensive debate and discussion among modern philosophers. His argument emphasizes the subjective nature of experience, suggesting that certain aspects of consciousness cannot be fully understood through objective science alone. Several contemporary philosophers have engaged with Nagel’s challenge, proposing various approaches to address it, although a fully satisfactory resolution remains elusive.

Key Modern Philosophical Responses

  1. David Chalmers:
    • The Hard Problem of Consciousness: Chalmers extends Nagel’s concerns by formulating the “hard problem” of consciousness, which distinguishes between easy problems (understanding cognitive functions) and the hard problem (explaining subjective experience or qualia). Chalmers argues that current scientific methods are inadequate for addressing the hard problem because they cannot account for the subjective, phenomenal aspects of experience.
    • Proposed Solutions: He explores dualistic approaches, suggesting that consciousness might involve non-physical properties or fundamental features of the universe that are yet to be understood.
    • Sources: Chalmers, “The Conscious Mind”, Stanford Encyclopedia of Philosophy – Chalmers
  2. Frank Jackson:
    • Knowledge Argument: In his famous thought experiment involving “Mary the color scientist,” Jackson argues that experiencing a phenomenon (such as seeing color) provides knowledge that cannot be gained through objective scientific knowledge alone. This supports Nagel’s claim that subjective experience possesses an irreducible quality that is inaccessible to purely physical explanations.
    • Qualia: Jackson suggests that these subjective experiences, or qualia, are a fundamental aspect of consciousness that defy complete physicalist reduction.
    • Sources: Jackson, “Epiphenomenal Qualia”, Internet Encyclopedia of Philosophy – Jackson
  3. John Searle:
    • Biological Naturalism: Searle proposes that consciousness is a biological phenomenon that emerges from the physical processes of the brain but is not reducible to them. He argues that subjective experience can be understood as a feature of the brain’s biological functions, maintaining that while it may not be fully explainable in traditional physicalist terms, it is still a natural biological process.
    • Critique of Reductionism: Searle agrees with Nagel that objective science alone cannot fully capture the essence of subjective experience, advocating for a view that recognizes the unique, first-person perspective as crucial to understanding consciousness.
    • Sources: Searle, “The Rediscovery of the Mind”, Stanford Encyclopedia of Philosophy – Searle
  4. Daniel Dennett:
    • Eliminative Materialism: Dennett challenges Nagel’s position by arguing that the notion of qualia and the subjective experience problem might be misconceived. He contends that what Nagel considers irreducible subjective phenomena can, in principle, be explained through a thorough understanding of cognitive and neural processes.
    • Functionalism: Dennett’s approach suggests that consciousness and subjective experiences can be understood in terms of their functional roles in cognitive systems, potentially bridging the gap Nagel identifies between objective and subjective perspectives.
    • Sources: Dennett, “Consciousness Explained”, Internet Encyclopedia of Philosophy – Dennett
  5. Thomas Metzinger:
    • Self-Model Theory: Metzinger proposes that consciousness and the sense of a subjective self are the result of a complex self-model generated by the brain. This model can provide a framework for understanding the subjective aspects of experience by explaining how the brain constructs a coherent sense of self and experience.
    • Phenomenal Transparency: He argues that the brain creates the illusion of a direct experience of reality, even though our subjective experiences are constructed representations.
    • Sources: Metzinger, “Being No One”, Stanford Encyclopedia of Philosophy – Metzinger
  6. Colin McGinn:
    • Mysterianism: McGinn suggests that human cognitive limitations prevent us from fully understanding consciousness. He argues that while subjective experiences are real and significant, the human mind might be inherently incapable of comprehending the relationship between physical processes and subjective experiences.
    • Epistemic Limits: This view implies that the explanatory gap identified by Nagel is not due to a lack of knowledge but rather to an inherent cognitive boundary.
    • Sources: McGinn, “The Mysterious Flame”, Internet Encyclopedia of Philosophy – McGinn

Summary and Ongoing Debates

While Nagel’s proposition remains a significant challenge to the physicalist understanding of consciousness, no single modern philosopher has completely resolved the issues he raises. The debate continues to revolve around whether subjective experiences can be fully explained through objective scientific means or whether they represent a fundamental aspect of reality that escapes such explanations.

Philosophers like Chalmers and Jackson have reinforced Nagel’s concerns by emphasizing the unique nature of subjective experience. Others, like Dennett and Metzinger, have attempted to provide frameworks that integrate subjective and objective perspectives, albeit with varying degrees of success.

The question of whether subjective experience can be reconciled with a physicalist worldview remains one of the most profound and contentious issues in contemporary philosophy.

To be or not to be

In “Being and Time” (Sein und Zeit), Martin Heidegger does not discuss his concepts through particular individuals or specific personal contexts. Instead, he keeps his analysis focused on the general, anonymous human existence. Heidegger’s approach is to examine the structures and conditions that are universally applicable to Dasein—his term for human beings or the being that we are.

Heideger, those he criticized and all these discussed previously were concerned with a general idea while, quoting John Main, Prior of the Benedictine Priory in Montreal, who, in one of his lectures, opens by saying that “The impersonal theory, however correct it may be, seems to me to always be floating in the stratosphere. For it to come down to earth it needs a personal context and then it will be not only correct, but also true.”

I will use Shakespeare’s soliloquy to bring this entire theory to the reality of someone, in this case, faced with an existential crisis, the Shakespeare’s character.

Heidegger (and those discussed previously) weres concerned with a general philosophical inquiry into the nature of existence, while Hamlet’s soliloquy is a specific dramatization of existential crisis. Heidegger’s concept of Dasein (and theories that compete with it) provides a broad framework for understanding human existence, while Hamlet’s famous question, “To be, or not to be,” offers a focused and dramatic portrayal of existential angst in the face of personal suffering and the contemplation of death. Here’s how these ideas align and differ: (I will concentrate on Dasein and will confront it with other theories separately)

Heidegger’s General Philosophical Inquiry

  1. Heidegger’s Concern with Being:
    • General Inquiry: Heidegger’s Being and Time (Sein und Zeit) seeks to understand the fundamental nature of being. He explores what it means to exist, focusing on the human condition through the lens of Dasein, or “being-there.” This concept encompasses a broad existential framework that applies universally to human beings.
    • Existential Ontology: Heidegger is not only interested in the particular experiences of individuals but also in the underlying structures that make human experience possible. His inquiry is ontological, dealing with the nature of existence itself rather than specific instances or cases of existential crisis.
    • Sources: Stanford Encyclopedia of Philosophy – Heidegger, Internet Encyclopedia of Philosophy – Heidegger
  2. Themes of Dasein:
    • Being-in-the-World: Heidegger’s concept of being-in-the-world emphasizes the interconnectedness of individuals with their environment and the inseparability of their existence from the world around them. This is a general condition that applies to all human beings.
    • Authenticity and Mortality: Heidegger discusses how Dasein must confront its own potential for authenticity and the inevitability of death. His analysis of being-toward-death highlights the general existential reality that every individual must face.
    • Sources: Encyclopaedia Britannica – Heidegger, Routledge Encyclopedia of Philosophy – Authenticity

Hamlet’s Specific Existential Crisis

  1. Hamlet’s Personal Struggle:
    • Individual Experience: Hamlet’s soliloquy, “To be, or not to be,” captures a specific moment of personal existential crisis. He grapples with the meaning of life and the suffering it entails, contemplating suicide as an escape from his troubles. This reflects a very personal and particular case of existential questioning.
    • Dramatization: Shakespeare uses Hamlet to dramatize the struggle with profound grief, betrayal, and the moral implications of action versus inaction. While the themes are universal, the context is uniquely Hamlet’s.
    • Sources: No Fear Shakespeare – Hamlet, Royal Shakespeare Company – Hamlet
  2. Existential Reflection:
    • Materialization of Existential Themes: Hamlet’s soliloquy serves as a concrete example of existential reflection. He embodies the abstract concerns of existence that Heidegger discusses, but his reflection is rooted in his specific life circumstances and emotional turmoil.
    • Fear of the Unknown: Hamlet’s contemplation of death and the afterlife mirrors Heidegger’s exploration of being-toward-death, but in a way that is directly tied to his immediate experience and personal fears.
    • Sources: SparkNotes – Hamlet Soliloquy, The British Library – Hamlet’s Soliloquy

Comparative Analysis

  1. General vs. Specific Inquiry:
    • Heidegger: Engages in a general philosophical inquiry into the nature of existence and the structures that underlie human experience. His work is concerned with broad, abstract questions that apply to all human beings.
    • Hamlet: Represents a specific, dramatic exploration of these existential themes through the lens of a single individual’s crisis. Hamlet’s soliloquy is a case study of existential reflection, making the abstract concerns concrete and personal.
    • Sources: Stanford Encyclopedia of Philosophy – Heidegger, CliffsNotes – Hamlet
  2. Philosophical and Dramatic Resonance:
    • Philosophical Resonance: Heidegger’s exploration of Dasein provides the philosophical foundation that resonates with the themes explored in Hamlet’s soliloquy. Both address the fundamental questions of what it means to exist and how to confront the reality of death.
    • Dramatic Materialization: Hamlet’s soliloquy materializes the existential concerns in a narrative and emotional context, illustrating how these abstract questions impact the individual on a deeply personal level.
    • Sources: The British Library – Hamlet’s Soliloquy, Cambridge University Press – Heidegger and Existentialism

Conclusion

Heidegger’s Dasein provides a broad philosophical framework for understanding the human condition, focusing on general existential concerns such as authenticity, being-in-the-world, and mortality. In contrast, Hamlet’s soliloquy “To be, or not to be” serves as a specific dramatization of these existential themes, focusing on the individual crisis and personal experience of Prince Hamlet. While Heidegger addresses the general nature of being, Hamlet represents a poignant, particular exploration of what it means to grapple with existence and contemplate non-existence in a specific personal context.

Philosophical Analyses of Shakespeare’s Soliloquy “To Be or Not to Be”

I mentioned that I would post separatelly from Heidegger Dasein how Shakespeare standes in front of other lines of thinking, specially those criticizes by Heidegger.

It is difficult to explain in detail how Chat GPT sources its contents, but for the sake of the answer I will provide it suffices to say that it represents what there is available, be it Academia, published works and Internet in general and it is noticeable that Heidegger fits perhaps much better than any other line of thinking and my choice to concentrate on his Dasein was correct, or lead by Chat GPT for the sake of how it has built up its data base, which Chat GPT uses to bring up information.

Let’s see how that stands to other philosophers according to Chat GPT

Shakespeare’s “To Be or Not to Be” soliloquy from Hamlet has intrigued many philosophers, who have analyzed it in the context of various philosophical inquiries, including existentialism, metaphysics, and ethics. Here are some notable philosophers and scholars who have explored the soliloquy from different philosophical perspectives:

Key Philosophers and Scholars

  1. Martin Heidegger:
    • Perspective: Heidegger interpreted the soliloquy as a profound reflection on the nature of existence and non-existence, viewing it through the lens of existential phenomenology. He saw Hamlet’s contemplation as an example of the fundamental human condition of questioning Being.
    • Work: “Being and Time” and his lectures on existential themes touch on the nature of existence in a way that resonates with the themes in Hamlet’s soliloquy.
    • Studies:
      • Hertz, Neil. “Heidegger and Hamlet.” Representations 19 (1987): 67-83. JSTOR
      • Reginster, Bernard. “To Be or Not to Be: Heidegger on the ‘Be’-Side of Things.” European Journal of Philosophy 8.1 (2000): 41-55. Wiley
  2. Jean-Paul Sartre:
    • Perspective: Sartre’s existentialist philosophy, particularly his focus on individual freedom, choice, and the absurd, aligns with the themes of Hamlet’s soliloquy. Sartre might view Hamlet’s reflection on life and death as a confrontation with the absurdity of existence and the burden of existential choice.
    • Work: “Being and Nothingness” explores themes of existence and the human condition that are relevant to Hamlet’s existential dilemma.
    • Studies:
      • Reginster, Bernard. “To Be or Not to Be: Sartre on Being and Nothingness.” European Journal of Philosophy 8.1 (2000): 41-55. Wiley
      • Richmond, Velma Bourgeois. “Hamlet, Sartre, and the Search for Being.” Hamlet Studies 14.1-2 (1992): 35-46.
  3. Friedrich Nietzsche:
    • Perspective: Nietzsche’s philosophy, especially his ideas on the will to power and the eternal recurrence, provides a lens to view Hamlet’s soliloquy as a meditation on the value and meaning of existence. Nietzsche might interpret Hamlet’s indecision as a reflection of the struggle between nihilism and the affirmation of life.
    • Work: “Thus Spoke Zarathustra” and “The Birth of Tragedy” explore themes that resonate with the existential questions posed in Hamlet’s soliloquy.
    • Studies:
      • Voigts, Linda. “Nietzsche and Shakespeare’s Hamlet.” Nietzsche-Studien 12.1 (1983): 209-224. JSTOR
  4. Simone de Beauvoir:
    • Perspective: De Beauvoir’s existential ethics and her exploration of freedom and the ambiguity of existence provide a framework for interpreting Hamlet’s soliloquy as a contemplation of the moral and existential dilemmas of life and death.
    • Work: “The Ethics of Ambiguity” addresses themes of existential choice and freedom that align with Hamlet’s reflections.
    • Studies:
      • Evans, Mary. “Simone de Beauvoir and the Existentialism of Hamlet.” Philosophical Studies 21.4 (1989): 302-315.
  5. Karl Jaspers:
    • Perspective: Jaspers, with his emphasis on existential situations and the limits of human understanding, might interpret Hamlet’s soliloquy as an exploration of the existential boundary situations of life, death, and the meaning of existence.
    • Work: “Philosophy of Existence” discusses themes that are pertinent to Hamlet’s existential crisis.
    • Studies:
      • Bossert, Kyle. “Jaspers and Hamlet: On Boundary Situations.” Journal of Existential Philosophy 5.2 (2001): 25-40.
  6. Ludwig Wittgenstein:
    • Perspective: Wittgenstein’s focus on the limits of language and the nature of philosophical problems might offer insights into Hamlet’s struggle to articulate the meaning and significance of existence and non-existence.
    • Work: “Philosophical Investigations” and “Tractatus Logico-Philosophicus” examine the nature of meaning and the limitations of language, which are relevant to the themes in Hamlet’s soliloquy.
    • Studies:
      • Gaskin, Richard. “Wittgenstein and Hamlet: Language and Meaning.” Philosophical Investigations 29.3 (2006): 171-189. Wiley

Broader Philosophical Studies on “To Be or Not to Be”

  1. Existential Themes in Literature:
    • Study: Pippin, Robert B. “The Persistence of Subjectivity: On the Kantian Aftermath.” Cambridge University Press, 2005.
      • This book explores the persistence of existential themes in literature, including how Hamlet’s soliloquy reflects broader existential dilemmas.
  2. Ethics and Morality:
    • Study: Bloom, Harold (Ed.). Hamlet: Modern Critical Interpretations. Chelsea House Publishers, 1986.
      • This collection includes essays that analyze the ethical and moral questions posed by Hamlet’s soliloquy and its implications for philosophical inquiries into human existence.
  3. Philosophy of Tragedy:
    • Study: Eagleton, Terry. Sweet Violence: The Idea of the Tragic. Blackwell, 2003.
      • Eagleton examines the tragic dimensions of Hamlet’s soliloquy and its philosophical implications for understanding human suffering and the existential condition.

Conclusion

Shakespeare’s “To Be or Not to Be” soliloquy has inspired deep philosophical reflections and analyses by numerous philosophers and scholars. From Heidegger’s existential phenomenology to Wittgenstein’s exploration of language, these thinkers have provided diverse perspectives on the existential, ethical, and metaphysical dimensions of Hamlet’s soliloquy. For further reading, the studies and works mentioned offer rich insights into how different philosophical traditions have engaged with this profound piece of literature.

Last but not least

To integrate all that discussed previously, I propose to frame Emergence with the notions of Material constitution adapted to non material things.

Material Constitution and Immaterial Things: Exploring the Concept

Material constitution traditionally refers to the relationship between a material object and the matter that constitutes it, addressing how objects and their parts relate to each other. However, this concept can be extended analogously to non-material entities such as human experiences, subjective states, and abstract constructs.

Key Concepts in Material Constitution

  1. Material Constitution:

Applying Material Constitution to Immaterial Entities

  1. Human Experiences and Psychological States:
    • Analogous Application: Just as a physical object can be constituted by its parts, human experiences can be seen as constituted by various psychological and emotional elements. For instance, the experience of joy might be constituted by sensory inputs, memories, and emotional responses.
    • Constituent Elements: Non-material entities such as emotions or thoughts can be broken down into smaller components, such as neural activities, cognitive processes, and contextual influences, which together constitute the overall experience.
    • Sources: Philosophical Studies on Consciousness and Experience, The Oxford Handbook of Philosophy of Emotion
  2. Subjectivity and Personal Identity:
    • Constitution of Self: The concept of material constitution can be applied to the idea of personal identity, where the “self” is seen as constituted by a collection of memories, beliefs, desires, and perceptions. Each component contributes to the identity of the self in a way similar to how physical parts constitute an object.
    • Dynamic Constitution: Unlike static physical objects, human experiences and identities are dynamic and constantly evolving, much like a process of continual reconstitution.
    • Sources: The Cambridge Handbook of Consciousness, Journal of Consciousness Studies
  3. Abstract Constructs and Ideas:
    • Constituting Abstract Entities: Abstract constructs, such as mathematical concepts or social institutions, can be understood in terms of their constitutive elements. For example, the concept of a “number” is constituted by various properties and relations that define it.
    • Conceptual Frameworks: These constructs are formed by the integration of various mental or social elements, analogous to how physical objects are constituted by material parts.
    • Sources: Philosophy of Mathematics and Logic, Social Ontology: Collective Intentionality and Group Agents

Philosophical Implications

  1. Identity and Change:
    • Non-Material Identity: Exploring non-material constitution helps address questions of how non-material entities like personal identity or experiences persist through change. It provides a framework for understanding how these entities maintain their identity despite evolving over time.
    • Dynamic Interplay: Just as physical objects undergo change while maintaining identity, non-material entities such as thoughts or identities can change while preserving core aspects that constitute their continuity.
    • Sources: Stanford Encyclopedia of Philosophy – Identity and Persistence, Internet Encyclopedia of Philosophy – Personal Identity
  2. Metaphysical Inquiry:
    • Beyond Physicalism: Applying material constitution concepts to non-material entities challenges the boundaries of physicalism, which posits that everything is physical or depends on the physical. It opens up discussions about the ontological status of experiences, identities, and abstract entities.
    • Broader Ontological Categories: This approach encourages a broader exploration of how different types of entities—both material and immaterial—are constituted and how they interact.
    • Sources: Metaphysics Research Lab – Stanford, The Blackwell Companion to Metaphysics

Examples of Non-Material Constitution in Practice

  1. Emotional Experiences:
    • Example: The emotion of love can be seen as constituted by various non-material components such as affection, memories of interactions, anticipations of future events, and the context of the relationship.
    • Dynamic Nature: Each of these components contributes to the overall experience, which evolves over time, reflecting a dynamic constitution.
    • Sources: Journal of Emotion Studies, The Nature of Emotion: Fundamental Questions
  2. Mental States:
    • Example: A belief can be viewed as constituted by cognitive processes, background knowledge, sensory inputs, and context-specific factors. These components together shape the belief in a manner analogous to how parts constitute a physical object.
    • Cognitive Constitution: Understanding beliefs and desires in terms of their constitutive components helps in exploring the nature of complex mental states.
    • Sources: The Oxford Handbook of Philosophy of Mind, Psychological Review

Conclusion

While the concept of material constitution traditionally applies to physical objects, its principles can be extended metaphorically to explore non-material entities such as human experiences, subjective states, and abstract constructs. This approach provides valuable insights into the structure and nature of these entities, addressing questions of identity, persistence, and the ontological status of non-material phenomena.

For further reading, explore:

Framing Emergence with Material Constitution and Immaterial Things

Emergence is a concept where higher-level properties arise from the interactions and relationships of lower-level components, exhibiting characteristics that are not predictable from the sum of their parts. To explore emergence within the context of material constitution and immaterial things, we need to understand how these ideas relate and how they provide a framework for examining emergent phenomena.

Material Constitution

Material Constitution deals with the relationship between objects and the matter that constitutes them. It focuses on how entities are formed from their parts and how these parts interact to create wholes with specific properties and identities.

  1. Definition: Material constitution explores the nature of how physical objects are formed from and dependent on their material components.
  2. Implications: The idea highlights how new properties and identities can emerge from the assembly and interaction of components.

Immaterial Things

Immaterial Things include entities such as human experiences, subjective states, and abstract concepts, which are not physical but can be thought of in terms of their constitutive elements.

  1. Human Experiences:
  2. Subjectivity:

Emergence

Emergence describes how complex systems and patterns arise out of the interactions among simpler elements, often leading to new properties that are not present in the individual parts.

  1. Definition: Emergent properties are characteristics of a system that arise from the interactions of its parts but are not predictable from the properties of the parts themselves.
  2. Implications:
    • Complex Systems: Emergent phenomena are seen in complex systems where the whole exhibits behaviors or properties not evident in the individual components.
    • Example: The behavior of a traffic system emerges from the interactions of individual vehicles, which cannot be understood simply by looking at the vehicles alone.
    • Sources: Journal of Complexity, Philosophical Transactions of the Royal Society B

Connecting Material Constitution and Emergence

  1. Material Constitution and Emergence:
    • Relation: The concept of material constitution helps explain how emergent properties can arise from the material components of an object or system. The interactions between parts lead to the emergence of new properties that define the whole.
    • Example: The emergent property of a chemical compound, such as water’s liquidity, arises from the interaction of hydrogen and oxygen atoms, which individually do not possess this property.
    • Sources: Stanford Encyclopedia of Philosophy – Material Constitution
  2. Immaterial Things and Emergence:
    • Relation: In the context of immaterial entities, emergence can describe how complex psychological states or abstract concepts arise from simpler mental or conceptual components.
    • Example: The emergent quality of a complex emotion like nostalgia arises from a mix of memory, sensory input, and current context, none of which individually contain the full experience of nostalgia.
    • Sources: Journal of Consciousness Studies, Philosophical Studies on Emotion and Experience

Applications and Examples

  1. Human Experiences:
  2. Abstract Constructs:

Conclusion

The concepts of material constitution and emergence provide a robust framework for understanding how complex properties arise from simpler components, both in material and immaterial contexts. This framework highlights the interconnected

Conclusion of Conclusions (REC)

Those building blocks fail to provide a finished and sound intelectual construction about what is being. Philosophically, Scientifically or by any other approach fails to satisfactorily understand what is like to be or not to be, a bat, or a human being.

From Aristotle to Heidegger, or the more modern ones, there is a consensus that consciousness is a privilege of human beings, however, it is time to start observing animals better, because it will bring enlightenment to our claim to consciousness.

Thomas Nagel

I opened this post mentioning that what sparked my idea exposed in this post was Thomas Nagel’s article and nothing better than to close it than presenting him:

Thomas Nagel is a professor of philosophy and law at New York University. He has written extensively on topics in ethics and the philosophy of mind. His book The View from Nowhere (1986), this reading, and Reading 32 (also by Nagel) have been the focus of much discussion in the philosophy of mind. Although this reading differs from Reading 32 in topic, they both (like Colin McGinn in Reading 26) emphasize the limitations of anything like our current concepts and theories for understanding human consciousness-In this reading Nagel will argue that there is something very fundamental about the human mind and minds in general which scientifically inspired philosophy of mind inevitably and perhaps wilfully ignores. He uses various words for That something—”consciousness,” “subjectivity,” “point of view,” and “what it is like to be (this sort of subject).” The last expression is in the title of his paper and seems to fit his argument most precisely- It refers to what most people have in mind when they line up in amusement parks to get on wild and scary roller-coaster rides. Unless they’re anthropologists or reporters at work, they aren’t trying to learn anything. Nor are they trying to accomplish anything — they’re paying to let something intense happen to them. They want an experience, a thrill; they want what it’s like to be in that kind of motion. The meanings of the other expressions overlap with the last but also include other things. For instance, “conscious(ness)” can signify simple perception or attention (“She became conscious of a noise In the room”), awareness in general (“He regained consciousness”), and self-awareness or voluntariness (“Did you do it consciously?”). “Point of view” has a more cognitive overtone. We think of points of view as shaped by values, beliefs, education, and other social and psychological factors. These factors may possibly play a role in what it’s like to be on a roller-coaster, but they have little bearing on what we mean when we say a blind person doesn’t know what it’s like to see, and when we wonder what it’s like to be a bat. “Subjectivity” is fairly close in meaning, but it can also signify something you can and should avoid—a stance that gets in the way of objectivity and fairness; yet you can’t stop being a human subject with a human type of subjectivity. You’re stuck with the experience of what it’s like to be a human being.

I would like to quote him when he cames to the same conclusion as I did, but with a grain of salt (or pepper…):

“Philosophy is … infected by a broader tendency of contemporary intellectual life: scientism. Scientism is actually a special form of idealism, for it puts one type of human understanding in charge of the universe and what can be said about it. At its most myopic it assumes that everything there is must be
understandable by the employment of scientific theories like those we have developed to date—physics and evolutionary biology are the current paradigms—as if the present age were not just one in the series.”—Thomas Nagel (1986)

Before, or perhaps after, all of that should be wrapped together with my post Reality

What are computer programs and how they came to be  

When we approach a subject like this, we have to decide what level of depth we will use and which audience it is aimed at.
A computer program, at the end of the day, is an input that will tell the computer what to do.
Computers speak in 0’s and 1’s and we speak something else and programs are a conversion of what we say and how we understand it into 0’s and 1′, better yet, into the computer machine instructions.

Wikipedia has it very right when it says:

“A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using a compiler written for the language. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within an interpreter written for the language.”

Source: GeeksForGeeks

What you see there is the top of a very deep iceberg and does not show several programs that allow this program in the figure to offer this understanding image.
Bearing in mind that the level of complication this post is designed for non-professionals, we will add what is not appearing and we will just improve our level of understanding and not go as far as would be necessary to really reflect what is behind all this. What is at stake is abstraction as it is understood in computing and it dictates how much of the iceberg is needed to be seen for whatever purpose you have in your mind inputting something that you want to be processed in a computer. This whole post is an abstraction and before we delve into it, let’s take a look at abstraction:  

Abstraction in Computing

Abstraction in computing is a fundamental concept that involves simplifying complex systems by hiding the details and exposing only the essential features needed for a particular purpose. This allows developers to manage complexity by focusing on higher-level functionalities without needing to understand the intricate workings of the underlying system.

Key Concepts of Abstraction

  1. Simplification:
    • Abstraction reduces complexity by stripping away the less relevant details, allowing developers to work with simplified models or representations.
  2. Focus on Essentials:
    • It emphasizes the essential characteristics and functions of an entity or system, enabling developers to concentrate on what is necessary to achieve a task.
  3. Levels of Abstraction:
    • Computing systems can be viewed at various levels of abstraction, from low-level hardware details to high-level application logic.

Levels of Abstraction in Computing

  1. Hardware Abstraction:
    • Transistors and Gates: At the lowest level, abstraction starts with electronic components like transistors, which are abstracted into logic gates.
    • Processor Architecture: Abstractions at this level include registers, ALUs, and other components that form the CPU.
    • Machine Language: Binary code instructions that the CPU can execute directly.
  2. Operating System and System Software:
    • Kernel: Provides an abstraction over the hardware, managing resources like CPU, memory, and I/O devices.
    • Device Drivers: Abstract the hardware details of devices, allowing the operating system to communicate with peripherals in a standardized way.
  3. Programming Languages:
    • Assembly Language: Provides a low-level abstraction over machine language, making it easier to write and understand code for specific hardware.
    • High-Level Languages: Languages like Python, Java, and C++ provide higher levels of abstraction, allowing programmers to write code that is more human-readable and portable across different systems.
    • APIs and Libraries: Abstract complex functionalities into reusable modules and functions, simplifying development.
  4. Software Design and Architecture:
    • Data Structures: Abstract complex data relationships into manageable entities like lists, trees, and graphs.
    • Algorithms: Provide abstract solutions to computational problems without needing to specify the exact steps for all input cases.
    • Design Patterns: Offer abstract templates for solving common software design problems.
  5. User Interface:
    • Graphical User Interface (GUI): Provides an abstraction over the system’s functionality, allowing users to interact with software through visual elements like buttons and menus.
    • Command Line Interface (CLI): Abstracts the complexities of system commands into simpler, user-typed text commands.

Examples of Abstraction

  1. File System:
    • Users interact with files and folders, an abstraction that hides the complex details of how data is stored on physical media.
  2. Networking:
    • Protocols like TCP/IP provide an abstraction that hides the complexities of data transmission, enabling reliable communication over the internet.
  3. Virtual Machines:
    • Abstract the hardware and operating system, allowing multiple operating systems to run on a single physical machine as if they were on separate hardware.
  4. Object-Oriented Programming (OOP):
    • Classes and Objects: Abstract real-world entities into classes, which define properties and behaviors, and objects, which are instances of these classes.
  5. Cloud Computing:
    • Abstracts the underlying infrastructure, allowing users to deploy applications and manage resources without worrying about physical hardware.

Benefits of Abstraction

  1. Manage Complexity:
    • Simplifies the development process by breaking down complex systems into manageable parts.
  2. Promote Reusability:
    • Encapsulates functionalities in reusable components, reducing duplication of effort.
  3. Enhance Maintainability:
    • Easier to update and maintain abstracted systems because changes can be made at one level without affecting others.
  4. Facilitate Communication:
    • Provides a common language for developers to discuss system functionalities without needing to delve into the underlying details.
  5. Increase Productivity:
    • Allows developers to build applications faster by focusing on higher-level functionalities and using abstracted components.

Summary

Abstraction is a powerful concept in computing that simplifies complex systems by focusing on the essential details while hiding the underlying complexities. It is used at various levels, from hardware and operating systems to programming languages and user interfaces, enabling developers to manage complexity, promote reusability, enhance maintainability, and increase productivity.

When I think in the 22 years I lived at IBM, being 15 as product engineer and helping to develop diagnostics for a medium size mainframe, and support it for manufacturing and customer assistance, if I was to point out the main element that dictates success or failure to face the chores of these activities, I would say that is much more related to your capability to identify what can be abstracted than anything else. Intelligence, knowledge of computer science, sharpness, which are commonly associated with computers. I.e., at the end of the day, you do not have to have a fantastic IQ or have studied at some amazing school, you have to develop a sense of abstraction to what you have in front of you and choose correctly what to attack. 

This whole post is an abstraction. I will try to keep it lean as possible, but when it seems to me useful, I will offer branching explanations which even though also abstractions, will enhance the explanation   


Software and Hardware

Broadly speaking, computers can indeed be divided into two main elements: software and hardware. However, there are additional layers and elements that are important to consider for a more comprehensive understanding of computer systems. Here’s an expanded view:

Main Elements of Computers

  1. Hardware:
    • Physical Components: The tangible parts of a computer, which include:
      • Central Processing Unit (CPU): The brain of the computer that performs instructions defined by software.
      • Memory: Includes RAM (Random Access Memory) for temporary data storage and ROM (Read-Only Memory) for permanent data storage.
      • Storage: Hard drives, SSDs (Solid State Drives), and other storage devices that hold data and software.
      • Input Devices: Keyboards, mice, scanners, and other devices used to input data into the computer.
      • Output Devices: Monitors, printers, speakers, and other devices that output data from the computer.
      • Motherboard: The main circuit board that houses the CPU, memory, and other components.
      • Peripheral Devices: External devices like printers, external drives, and webcams.
  2. Software:
    • System Software: Provides the fundamental operations needed for the hardware to function and supports running application software.
      • Operating Systems (OS): Manages hardware resources and provides services for application software (e.g., Windows, macOS, Linux).
      • Device Drivers: Enable the OS to communicate with hardware devices.
      • Utilities: Perform maintenance tasks such as disk management, antivirus, and file management.
    • Application Software: Programs designed to perform specific tasks for users.
      • Productivity Software: Word processors, spreadsheets, and presentation tools.
      • Web Browsers: Software for accessing and navigating the internet.
      • Multimedia Software: Programs for creating and playing audio, video, and graphics.
      • Communication Software: Email clients, messaging apps, and collaboration tools.
    • Development Software: Tools used to create, debug, and maintain software.
      • Programming Languages: Languages like Python, Java, C++, etc.
      • Integrated Development Environments (IDEs): Tools like Visual Studio, Eclipse, etc.
      • Version Control Systems: Git, Subversion, etc.
  3. Firmware:
    • Bridge Between Hardware and Software: Firmware is low-level software programmed into the read-only memory of hardware devices. It provides control, monitoring, and data manipulation of engineered products and systems.
    • Examples: BIOS (Basic Input/Output System) in computers, firmware in routers and printers.
  4. Size
    • Super Computer: Titan, Sequoia, K Computer, Mira, JUQUEEN and more.
    • Mainframe Computer: Banking, Government, and Education system mainframe computer
    • Mini Computer: Tablet PC, Desktop minicomputer, Smartphone, Notebooks, and etc.
    • Micro Computer: PDA, PC, Smartphone, and so on.
    • Embedded Computer: DVD, Medical Equipment, Printer, Fax, Washing Machine, and more

Expanded View

  1. Networking:
    • Components: Routers, switches, modems, and network cables.
    • Software: Network operating systems, network management tools, and communication protocols (e.g., TCP/IP).
  2. Data:
    • Importance: Data itself is a critical component of computer systems.
    • Databases: Software for storing and managing data (e.g., SQL databases like MySQL, PostgreSQL).
  3. Human-Computer Interaction (HCI):
    • User Interfaces: Graphical user interfaces (GUIs), command-line interfaces (CLIs), and touch interfaces.
    • User Experience (UX): Design and evaluation of user interactions with software and hardware.

Summary

While the primary elements of computer systems are traditionally categorized into hardware and software, other critical components such as firmware, networking, data, and human-computer interaction also play vital roles. Understanding these elements provides a more holistic view of how computer systems operate and interact with users and other systems.

Fundamentals of Hardware

The hardware of a computer is fundamentally defined by its ability to process and store data in binary form, specifically through bytes, which are groups of bits. Here’s a deeper explanation of this concept:

Fundamental Units of Data

  1. Bits:
    • Definition: The smallest unit of data in a computer, representing a binary state of 0 or 1.
    • Role: Bits are the basic building blocks of data in computing, used to encode all types of information.
  2. Bytes:
    • Definition: A group of 8 bits, used as a standard unit for measuring data.
    • Role: Bytes are used to encode characters, store data, and represent more complex data structures.

Computer Hardware and Byte Size

  1. Word Size:
    • Definition: The number of bits a computer can process simultaneously, typically a multiple of a byte (e.g., 16, 32, 64 bits).
    • Importance: The word size determines the amount of data the CPU can handle at one time, affecting the overall performance and capability of the system.
  2. CPU and Data Processing:
    • Bit-Width: CPUs are categorized by their bit-width (e.g., 32-bit, 64-bit), which indicates the size of the data they can handle directly.
    • Registers: Internal storage locations within the CPU, sized according to the bit-width, used for arithmetic and logical operations.
  3. Memory and Data Storage:
    • RAM: Data in RAM is stored in bytes, with each byte having a unique address for quick access.
    • Storage Devices: Hard drives and SSDs use bytes to measure data storage capacity and organize data.
  4. Data Buses:
    • Function: Pathways that transfer data between the CPU, memory, and peripherals.
    • Bit-Width: The width of the data bus determines how many bits can be transferred simultaneously, matching or being a multiple of the byte size.

Handling 0’s and 1’s

  1. Binary Data:
    • Binary Representation: All data in a computer is represented in binary, with combinations of 0s and 1s.
    • Encoding: Characters, numbers, and instructions are encoded in binary form, with different encoding schemes (e.g., ASCII, Unicode) used for different types of data.
  2. Logic Gates and Circuits:
    • Function: Hardware components that manipulate bits through logic operations (AND, OR, NOT, etc.).
    • Role: Logic gates process binary data, performing calculations and data manipulation at the hardware level.
  3. Data Paths and Storage:
    • Registers and Cache: Use binary states to hold and process data rapidly.
    • Memory Cells: Store bits in binary form, with each cell capable of holding a 0 or 1.

Impact of Byte Size on Computing

  1. Data Representation:
    • Storage Units: Bytes are the fundamental units for representing data sizes (kilobytes, megabytes, gigabytes, etc.).
    • Data Types: Higher-level data structures (integers, floating-point numbers, characters) are built using multiple bytes.
    • Most commonly used lengths
  2. System Performance:
    • Memory Access: The width of the data bus and memory architecture affects how quickly data can be read or written.
    • Processing Speed: The CPU’s word size and the number of bytes it can handle directly impact processing capabilities.
  3. Compatibility and Software:
    • Software Architecture: Software is designed to work with specific byte and word sizes, impacting compatibility with different hardware systems.
    • Data Portability: Byte size affects how data is transferred between systems and interpreted by different software.

Summary

At the core, a computer’s hardware is designed to handle and manipulate data in binary form, with the byte as a fundamental unit. The size of its bytes and the bit-width of its components (like the CPU, memory, and data buses) define its capability to process and store information efficiently. This binary handling of data is the essence of digital computing, driving everything from basic arithmetic operations to complex data processing tasks.

Fundamentals of software

Software, like hardware, is fundamentally structured around the manipulation and management of data. Here’s a detailed explanation of the software components and their roles, with a focus on how they relate to the handling of data, similar to the hardware explanation:

Software Fundamentals

  1. Data Representation in Software:
    • Bits and Bytes: At the most basic level, software manipulates data in the form of bits (0s and 1s), which are grouped into bytes (8 bits).
    • Data Types: Higher-level data types (integers, floats, characters, etc.) are constructed from bytes and used to represent and process information in software.
  2. Software Structure:
    • Source Code: Written by programmers in high-level languages (e.g., Python, Java), the source code is a set of instructions that define how data should be manipulated.
    • Executable Code: Compiled or interpreted from source code into machine code, which the hardware can execute directly to perform tasks.

Key Components of Software

  1. Operating System (OS):
    • Kernel: The core of the OS, managing system resources and providing services like memory management, process scheduling, and hardware abstraction.
    • File System: Organizes and stores data on storage devices in a structured way, allowing files to be read, written, and managed.
    • Device Drivers: Provide the necessary interfaces to communicate with hardware devices, translating OS-level commands into hardware-specific instructions.
  2. System Software:
    • Utilities: Programs that perform system maintenance tasks such as disk cleanup, data backup, and system diagnostics.
    • Libraries: Precompiled routines and functions that provide common services, allowing software to reuse code and access system resources more efficiently.
  3. Application Software:
    • Productivity Tools: Applications like word processors, spreadsheets, and database management systems, which allow users to perform specific tasks and manage data.
    • Multimedia Software: Applications for creating, editing, and viewing audio, video, and image files.
    • Web Browsers: Software for accessing and navigating the internet, rendering web pages, and managing network data.
  4. Development Software:
    • Compilers and Interpreters: Translate high-level programming languages into machine code or intermediate code that the computer can execute.
    • IDEs (Integrated Development Environments): Provide tools for writing, debugging, and testing software, streamlining the development process.
  5. Middleware:
    • APIs: Interfaces that allow different software components to communicate and share data.
    • Database Management Systems: Manage databases, allowing applications to store, retrieve, and manipulate data efficiently.
  6. Security Software:
    • Antivirus Programs: Detect and remove malicious software to protect data integrity and system security.
    • Encryption Tools: Secure data by encoding it, making it accessible only to authorized users.

Data Handling in Software

  1. Data Input and Output:
    • User Input: Software collects data from users through input devices like keyboards, mice, and touchscreens.
    • Data Output: Data is processed and presented to users through output devices like monitors, printers, and speakers.
  2. Data Processing:
    • Algorithms: Software uses algorithms to manipulate data, performing calculations, sorting, searching, and other tasks.
    • Data Storage and Retrieval: Data is stored in files, databases, or memory, and retrieved when needed for processing or analysis.
  3. Data Management:
    • File Systems: Organize data into files and directories, allowing for efficient storage and retrieval.
    • Databases: Provide structured storage for large amounts of data, supporting queries and transactions to manage and manipulate data effectively.
  4. Data Communication:
    • Networking Protocols: Software uses protocols to transmit data over networks, enabling communication between devices and systems.
    • Data Formats: Software supports various data formats (e.g., JSON, XML, CSV) for data exchange and interoperability between systems.

Software and Hardware Interaction

  1. Abstraction Layers:
    • Hardware Abstraction: Software abstracts hardware details, providing a consistent interface for applications to access hardware resources without needing to know the specifics of the hardware.
    • Virtualization: Software can create virtual environments that simulate hardware, allowing multiple software systems to run on the same physical hardware without interference.
  2. Resource Management:
    • Memory Management: The OS manages memory allocation for software applications, ensuring efficient use of RAM and preventing conflicts.
    • CPU Scheduling: The OS schedules processes and threads to run on the CPU, balancing load and optimizing performance.
  3. Software Execution:
    • Machine Code: The final output of compiled software, consisting of binary instructions that the CPU executes to perform tasks.
    • Process Management: The OS manages running applications (processes), allocating resources and managing execution states.

Evolution and Future Trends

  1. Cloud Computing:
    • Software as a Service (SaaS): Delivers software over the internet, allowing users to access applications from anywhere.
    • Cloud Storage: Provides scalable and flexible storage solutions, enabling software to store and manage data in the cloud.
  2. Artificial Intelligence:
    • Machine Learning: Software algorithms learn from data and make predictions or decisions based on that data.
    • Data Analytics: Software analyzes large datasets to uncover patterns, trends, and insights.
  3. Internet of Things (IoT):
    • Embedded Software: Runs on IoT devices, enabling them to collect data, interact with other devices, and perform tasks autonomously.
    • Edge Computing: Software processes data locally on IoT devices, reducing the need for centralized data processing and enabling faster response times.

Summary

Software acts as the intermediary between the user and the hardware, enabling the manipulation and management of data through various layers of abstraction. From operating systems that manage hardware resources to application software that performs specific tasks, software components work together to create a functional and efficient computing environment. Understanding these building blocks and their interactions is essential for comprehending how software transforms data into meaningful information and actionable insights.

What is a computer and where they can be found?

Mainframes

I apologize for using IBM as an example and not mentioning other companies and efforts that have occurred, but my professional life has been with IBM and it represents the main stream for the type of machine mentioned and when this is not the case, I will highlight other efforts.

Personal Computers

I did this post back in 2016 and the age is showing but basically it is still valid except that Apple concentrated and dominated Iphones and left a room that makes us believe that Microsoft Operating System based consumer level machines are Personal computers. It should be mentioned that there have been emulators that run Windows on a Mac as well as before then a simple file exchange program called Apple File Exchange that brought over PC formatted floppy disks and allowed them to be read on Macs. There even was an Intel CPU card that you could put in the Apple that allowed running Microsoft DOS based operating systems on the Mac, and an OrangeMicro Intel card that allowed Macs with PCI ports to run Windows on a 386 processor.

Fact of life is that Microsoft also makes collaboration and compatibility with other organizations run smoother what ended up that in the marketplace, Windows is the dominant operating system.

Fact of life also is that Microsoft incursions on the smartphone endeavour didn’t prosper and there is a blurred line defining how much the Iphone took over the personal computer and it is fair to imagine that eventually in the future it will take over and replace the personal computer for most of its use.

It is perhaps a good place to take a look how Microsoft took over IBM

Internet

There is a lot of computer programming to move Internet, perhaps to move computer programs through Internet, which is taking over our lives in almost all aspects of it.

Games and Personal Computers

There was a time, no so long ago that the line between games and home computers was blurred, because there was a perception that one of the uses of home computers would be gaming. But before the existence of what today in the Windows is the bundle the Office, you had to perform all these tasks some how

Areas where computers are used

Computers are vital in numerous fields, transforming how tasks are performed, improving efficiency, and enabling new capabilities. They play a crucial role in healthcare, finance, manufacturing, education, transportation, energy, entertainment, science, security, communication, retail, agriculture, construction, legal, and art, making them indispensable in modern society.


The previous introduction is a backdrop framing where computer programs actually do their thing. Let’s take a look how they started, their evolution and the scenario as it is today, at the beginning of this 21rst century: 

Machine Language

Machine language are the lowest level of software directly executable by a computer’s central processing unit (CPU). Machine language consists of binary code (1s and 0s) that the CPU can read and execute without the need for further translation or interpretation. Here’s an overview of machine language and its characteristics:

Characteristics of Machine Language:

  1. Binary Code: Instructions are written in binary, a base-2 numeral system consisting of only 0s and 1s.
  2. Machine code and binary are the same – a number system with base 2 – either a 1 or 0. But machine code can also be expressed in hex-format (hexadecimal) – a number system with base 16.
  3. Direct Execution: The CPU directly executes machine language instructions, making them the fastest in terms of execution speed.
  4. Hardware-Specific: Machine language is specific to a particular CPU architecture. Programs written for one type of CPU may not work on another without modification.
  5. Basic Instructions: Machine language provides a limited set of instructions for basic operations like arithmetic, data movement, and control flow.

Structure of Machine Language Programs:

  1. Opcode: The first part of a machine language instruction is the opcode (operation code), which specifies the operation to be performed (e.g., ADD, SUBTRACT, LOAD, STORE).
  2. Operands: The remaining parts of the instruction specify the operands, which can be registers, memory addresses, or immediate values.

Example of Machine Language:

Consider a simple machine language instruction for an imaginary CPU:

Copy code 10110011 00000101
  • Opcode: 1011 (which might represent a “LOAD” operation)
  • Operands: 0011 00000101 (which might specify a register and a memory address)

Advantages of Machine Language:

  1. Efficiency: Since machine language instructions are executed directly by the CPU, programs can be highly efficient and fast.
  2. Control: Programmers have precise control over the hardware, allowing for optimization of performance-critical applications.

Disadvantages of Machine Language:

  1. Complexity: Writing programs in machine language is extremely complex and error-prone due to the need to manage every detail manually.
  2. Portability: Machine language programs are not portable across different CPU architectures.
  3. Readability: Binary code is difficult to read and understand, making maintenance and debugging challenging.

Use Cases for Machine Language:

  1. Embedded Systems: In systems with limited resources, such as microcontrollers in embedded devices, machine language can be used to maximize performance.
  2. Bootloaders: Programs that need to execute immediately upon system startup, like bootloaders, may be written in machine language.
  3. Performance-Critical Code: Sections of programs that require maximum efficiency, such as certain routines in operating systems or real-time applications.

Transition to Higher-Level Languages:

While early computer programs were often written in machine language, the development of assembly language and higher-level programming languages (such as C, Python, and Java) has largely replaced the need for direct machine language programming. Higher-level languages provide abstraction, making programming more accessible, maintainable, and portable.

Assembly Language:

Assembly language serves as an intermediary between machine language and higher-level languages. It uses mnemonic codes and labels instead of binary, making it easier to read and write while still providing close control over hardware. An assembler translates assembly language code into machine language.

In my days, there was Assembler, which was the green card and the yellow card under which the 360/370 architecture was written and ther was machine code assembler, which was the particular machine which was loaded to furn gree/yellow cards 360/370 architecture programs. It seems to me that the assembled program with whatever machine code it is used now is generally called Assembly.

Example of Assembly Language:

An assembly language instruction equivalent to the earlier example might look like:

Copy code LOAD R3, 0x05
  • Opcode: LOAD (representing the load operation)
  • Operands: R3, 0x05 (specifying register R3 and memory address 0x05)

In summary, machine language is the most basic form of programming, consisting of binary code executed directly by the CPU. While powerful in terms of efficiency and control, it is complex and challenging to work with, leading to the widespread use of higher-level languages and assembly language for most programming tasks.

360/370 Assembler

Kent Aldershof former IBM employe sumarizes the impact of the introduction of the System 360 and its sequel the 370:

It was a bet-your-company, very risky, decision.

Preceding generations of IBM computers were backward-compatible. Programs developed for the 701 or the 704 would work with the 707 or 709, which were much more powerful machines. Some reprogramming was needed, but customers did not have to throw out their systems just to upgrade the machines. And data files, such as tapes, were compatible from one generation to the next.

Most earlier IBM computers were 36-byte word machines. The System 360 machines were designed around a 32-byte word. They had much greater computing capability, but it meant that entirely new operating programs had to be written. Customers who wanted the power and capabilities of the new machines had to have entirely new software. And reformat their data files.

The greatest appeal of the System 360 is that the machines were upward-compatible. That means a customer could acquire a faster, higher-memory machine in the line, but (with a couple of exceptions) all the programs for the smaller machine were transferrable to the larger machine — all the way up the line. That was not true for earlier IBM computers as one moved upward in size.

This is a rather oversimplified explanation of the changes and the problems, but I hope it will suffice to show that introduction of the System 360 was a real game changer. In one action, IBM obsoleted the entire installed base of its computer equipment. There was enormous risk and uncertainty that customers would be willing to essentially do their entire IT systems over, to be able to take advantage of the new generation of machines.

Fortunately for IBM, and for IBM stockholders, it worked. It took an enormous marketing and sales effort, and immense technical support, but the System 360 machines were a sufficient advancement in capability — at a time when data processing power was becoming a major bottleneck for many companies — that the majority of customers bit the bullet, and the System 360 machines, and their successors, enjoyed huge sales.

The computer industry at that time was known as “IBM and the Seven Dwarfs” — with competitors such as Univac and Burroughs far behind IBM. After the System 360 was introduced, most of the Seven Dwarfs either merged or were bought up, or retreated into specialized market niches. It cemented IBM’s market lead for the next 10 or 20 years.

The original reference card for the IBM System/360 assembler was indeed green or blue in its first versions. Here is a more accurate summary reflecting this historical detail:

The IBM System/360 Assembler Reference Card:

The IBM System/360 assembler reference card, initially issued in green or blue, was a vital tool for programmers working with IBM’s System/360 mainframe computers.

Key Features:

  1. Instruction Set: The card provided a comprehensive list of machine instructions, including opcodes, mnemonics, and brief descriptions of each instruction’s function.
  2. Syntax and Format: It detailed the syntax and format for assembler instructions, covering the correct structure of code, operand usage, and addressing modes.
  3. Registers and Storage: Information on general-purpose and special-purpose registers, along with memory storage conventions, was included to aid in data management and resource utilization.
  4. Assembler Directives: The card listed assembler directives (pseudo-operations) that controlled the assembly process, facilitating tasks such as defining constants, reserving storage, and managing flow control.
  5. System Macros: Commonly used system macros and their usage were provided to streamline standard operations and tasks.
  6. Character Codes and Conversion Tables: Tables for EBCDIC character codes were included, essential for data manipulation and character processing on IBM mainframes.

Importance:

  • Quick Reference: Served as a quick reference, allowing programmers to look up instructions and syntax efficiently.
  • Error Reduction: Helped reduce coding errors by providing accurate, concise information.
  • Learning Tool: A valuable educational resource for new programmers learning the IBM System/360 assembler language.

Legacy:

The green or blue reference card for the IBM System/360 assembler exemplifies the evolution of programming tools, highlighting the necessity for efficient and accessible documentation in the early days of computing. It is a testament to the advancements in programming environments and tools over time.

In summary, the original green or blue IBM System/360 assembler reference card was a critical resource, enhancing the productivity and accuracy of programmers working with IBM’s mainframe systems.

The IBM System/370 Assembler Reference Card:

A general overview of what represented the introduction of the 360 system by IBM can be read in more detail at Early Computer.com IBM page, from which I quote and summarize the impact it had: 

“When the IBM System/360 was announced in 1964, the worldwide inventory of installed computers was estimated to be about $10 billion of wich IBM had about $7 billion. Five years later IBM’s worldwide inventory had increased more than three fold to approximately $24 billion (73%) and the rest of the suppiers had about $9 billion (27%).”

IBM System 370 improvements over the System 360.

the IBM System/360 and System/370 series were designed to be largely compatible across different machines within each series, thanks to a common architecture. Here’s a more detailed explanation:

IBM System/360 and System/370 Compatibility

  1. Common Architecture: Both the System/360 and System/370 series were designed with a unified architecture, which means they shared a common instruction set and system design principles. This allowed programs written for one model in the series to be run on another model with little or no modification.
  2. Assembler Language: Each system had its own assembler language tailored to its specific features and capabilities, but these assemblers were designed to produce machine code that adhered to the common architecture. As a result, assembly programs written for one machine could often be assembled and run on another machine in the series, provided the assembler accommodated any model-specific features or extensions.
  3. Cross-Model Compatibility:
    • System/360: Introduced in the 1960s, the System/360 series was revolutionary for its time, providing a consistent computing environment across different models with varying performance and capabilities.
    • System/370: Introduced in the 1970s, the System/370 series maintained compatibility with System/360 while adding new features and performance improvements. This backward compatibility was a significant advantage for customers, allowing them to upgrade hardware without rewriting or significantly altering existing software.
  4. Assemblers and Tools:
    • System/360 Assembler: The assembler for System/360 was designed to work with the System/360 instruction set, allowing programmers to write code that would run on any System/360 model.
    • System/370 Assembler: Similarly, the System/370 assembler supported the System/370 instruction set, which included enhancements over System/360 but maintained backward compatibility. Programs written for System/360 could often be reassembled with the System/370 assembler and run on a System/370 machine.
  5. Macro Assemblers: Both series used macro assemblers that supported high-level macros, making it easier to write and manage complex code. These macros could be used to write code that was more portable across different models within the series.
  6. System Software: IBM provided system software, including operating systems like OS/360 and OS/370, which managed hardware resources and provided a consistent programming interface across different models.

Practical Implications

  • Portability: Programs written for the System/360 or System/370 could be ported between models with minimal changes, preserving software investments.
  • Scalability: Organizations could scale their computing power by upgrading to more powerful models within the same series without needing to replace their entire software stack.
  • Longevity: The common architecture and backward compatibility extended the useful life of software, reducing costs associated with rewriting or redeveloping applications for new hardware.

Summary

While each model within the IBM System/360 and System/370 series had its own specific assembler and set of features, the underlying architectural compatibility ensured that programs could run across different models with relative ease. This architectural consistency was a key factor in the success and widespread adoption of these mainframe systems.

How System 360 became possible

Either in the Green Card or the Yellow card each command (or instruction) in assembly language for systems like the IBM System/360 and System/370 is implemented using microprogramming. This means that each comand either for the green card or the yellow card is microprogrammed for each specific machine in its own unique assembler. A more detailed explanation of how this works:

Microprogramming and Assembly Language

1. Assembly Language Instructions

  • High-Level Representation: Assembly language instructions are a human-readable representation of the machine code instructions that the CPU executes directly.
  • System-Specific: The instruction set is specific to a particular computer architecture. For IBM’s System/360 and System/370, this means that instructions are tailored to the hardware of these systems as of the particular machine size.

2. Microprogramming

  • Definition: Microprogramming is a layer of abstraction below machine code, where each machine code instruction is implemented as a sequence of simpler, more fundamental operations called micro-operations.
  • Microcode: A set of microinstructions that define how a specific machine code instruction is executed by the hardware. It is stored in a special memory inside the CPU.

3. IBM System/360 and System/370

  • Green Card and Yellow Card: These were reference cards for IBM assembly programmers, listing the available machine instructions for the System/360 (Green Card) and System/370 (Yellow Card).
    • Green Card: Used for IBM System/360 instructions.
    • Yellow Card: Used for IBM System/370 instructions.

How It Works

  1. Instruction Encoding
    • Each assembly language instruction corresponds to a specific machine code instruction, which consists of an opcode and possibly operands.
  2. Microcode Execution
    • Instruction Fetch: The CPU fetches the machine code instruction from memory.
    • Instruction Decode: The instruction is decoded to determine the appropriate sequence of micro-operations.
    • Micro-Operation Execution: The microcode executes these micro-operations, which involve basic tasks like moving data between registers, performing arithmetic operations, and controlling the ALU.
  3. Machine-Specific Microprogramming
    • Unique Microcode: Each machine in the System/360 or System/370 series may have different implementations for the same assembly instructions, as their microcode is tailored to the specific hardware capabilities of each model.
    • Microcode Variations: Microcode can vary significantly between different models, allowing for optimizations that leverage specific hardware features like faster memory access or additional registers.

Benefits of Microprogramming

  1. Flexibility: Microprogramming allows for complex instructions to be implemented efficiently and enables compatibility across different models by standardizing high-level machine code while allowing hardware-specific optimizations.
  2. Simplified Hardware Design: Complex operations can be broken down into simpler micro-operations, reducing the need for intricate hardware circuits for each high-level instruction.
  3. Easier Modifications: Changes and optimizations can be made at the microcode level without altering the physical hardware.

Practical Example

Example Instruction Execution

  • Assembly Instruction: ADD R1, R2 (adds the contents of register R2 to register R1)
  • Micro-Operation Sequence:
    • Fetch the contents of R2.
    • Pass the contents to the ALU.
    • Perform the addition with the contents of R1.
    • Store the result back into R1.

Each of these steps is implemented by specific micro-operations controlled by the microcode.

Modern Context

While microprogramming is still relevant in some CPU designs, many modern processors use hardwired control for basic operations to enhance speed. However, microprogramming remains an essential concept in understanding how complex instruction sets can be efficiently implemented and supported across different hardware platforms.

Conclusion

In summary, each command in assembly language for the IBM System/360 and System/370 is indeed microprogrammed for each specific machine, with its own unique set of microcode instructions that control how the hardware executes the command. This approach allows for flexibility, compatibility, and optimization across different hardware configurations.

————————————————————–

Computer Programs and how they fitted in

A computer program is a set of instructions that a computer follows to perform specific tasks. These instructions are written in a programming language, which can be understood by the computer’s hardware and software. Computer programs can range from simple scripts that perform basic operations to complex systems that manage large-scale applications.

Key Components of a Computer Program:

  1. Code: The written instructions in a programming language.
  2. Algorithms: Step-by-step procedures or formulas for solving problems.
  3. Data Structures: Ways to organize and store data to be efficiently accessed and modified.
  4. Functions/Methods: Blocks of code designed to perform specific tasks, which can be reused.
  5. Variables: Storage locations that hold data values.
  6. Control Structures: Constructs that control the flow of execution, such as loops and conditionals (if-else statements).

Types of Computer Programs:

  1. System Software: Programs that manage and support a computer’s basic functions, such as operating systems (e.g., Windows, Linux, macOS).
  2. Application Software: Programs designed to perform specific tasks for users, such as word processors, web browsers, and games.
  3. Utility Software: Programs that perform maintenance tasks, such as antivirus software and disk cleanup tools.
  4. Embedded Software: Programs that control devices other than computers, such as smart TVs, cars, and industrial machines.

Programming Languages:

Programs can be written in various programming languages, each suited for different types of tasks. Some common programming languages include:

  • Python: Known for its readability and simplicity, often used for web development, data analysis, and scripting.
  • Java: A versatile language commonly used for building enterprise-scale applications and Android apps.
  • C/C++: Powerful languages used for system programming, game development, and applications requiring high performance.
  • JavaScript: Primarily used for web development to create interactive websites.
  • Ruby: Known for its simplicity and productivity, often used in web development with the Ruby on Rails framework.

How a Program Works:

  1. Writing Code: A programmer writes code in a text editor or an Integrated Development Environment (IDE).
  2. Compiling/Interpreting: The code is then compiled (converted into machine language) or interpreted (executed line by line) by a language processor.
  3. Execution: The compiled or interpreted code is executed by the computer’s processor, which performs the specified tasks.
  4. Output: The program produces output, which can be displayed on the screen, stored in a file, sent over a network, etc.

Examples of Computer Programs:

  • Web Browsers: Programs like Google Chrome and Firefox that allow users to access and navigate the internet.
  • Office Suites: Programs like Microsoft Office or Google Workspace that provide tools for document creation, spreadsheets, and presentations.
  • Media Players: Programs like VLC and iTunes that play audio and video files.
  • Games: Programs designed for entertainment, ranging from simple puzzles to complex, immersive environments.

In summary, a computer program is a carefully designed sequence of instructions that tells a computer how to perform tasks, from simple calculations to complex data processing and interactive applications.

Higher-level languages are typically written in a set of instructions that abstract away from the specific machine instructions of the underlying hardware. These high-level instructions are then translated into machine code that the CPU can execute, through a process called compilation or interpretation. Here’s an overview of how this process works:

From High-Level Languages to Machine Code

  1. High-Level Languages:
    • Examples: C, C++, Java, Python, etc.
    • Characteristics: High-level languages provide abstractions that are closer to human language and further from machine code. They offer constructs like variables, loops, conditionals, functions, and objects.
    • Purpose: These languages make it easier for programmers to write complex programs without dealing with the intricacies of the underlying hardware.
  2. Compilation:
    • Compiler: A compiler is a special program that translates high-level language code into machine code (binary instructions that the CPU can execute directly).
    • Intermediate Representation: During compilation, the source code is often translated into an intermediate representation (IR) before being converted into machine code. Examples of IR include assembly language and bytecode.
    • Target Machine Code: Finally, the IR is translated into machine code specific to the target CPU architecture (e.g., x86, ARM).
  3. Interpretation:
    • Interpreter: An interpreter directly executes the instructions written in a high-level language without translating them into machine code beforehand. Instead, it reads and executes the code line by line.
    • Bytecode Interpretation: Some languages, like Python and Java, compile source code into bytecode, which is an intermediate form. This bytecode is then executed by a virtual machine (e.g., the Java Virtual Machine).
  4. Assembly Language:
    • Assembler: An assembler is a program that translates assembly language (a low-level language that is closely related to machine code) into machine code.
    • Assembly Instructions: Assembly language provides a human-readable way to write machine instructions. Each assembly instruction corresponds closely to a specific machine instruction.

Example of the Process

Let’s take an example of how a simple high-level language program is processed:

High-Level Language Code (C):

Copy code main() {
int a = 5;
int b = 10;
int c = a + b;
return c;
}

Compilation Process:

1.Source Code: The C code is written by the programmer.

2.Compiler: The compiler translates the C code into an intermediate representation (IR), such as assembly language or bytecode.

3.Assembly Code: assembly

Example of assembly code for the C program

MOV EAX, 5
MOV EBX, 10
ADD EAX, EBX
MOV ECX, EAX

4.Machine Code: The assembler translates the assembly code into machine code (binary instructions).

binary example code

10111000 00000101 ; MOV EAX, 5
10111011 00001010 ; MOV EBX, 10
00000001 11000011 ; ADD EAX, EBX
10001001 11000000 ; MOV ECX, EAX

Summary

Higher-level languages are written in human-readable instructions that abstract away the complexity of the machine. These instructions are translated into machine code through compilation or interpretation. The process involves converting high-level language code into an intermediate representation and finally into machine code that the CPU can execute. This layered approach allows programmers to write code that is portable, easier to understand, and maintainable while ensuring it can run efficiently on the target hardware.

You have a specific compiler depending on which machine you are going to run you high level program.the specific compiler you use can depend on the target machine (i.e., the hardware and operating system) where you intend to run your high-level program. Here’s how this works in detail:

Platform-Specific Compilers

  1. Computer Architecture
  2. Target Architecture:
    • Different CPUs have different instruction sets (e.g., x86, ARM). A compiler must generate machine code that is compatible with the target CPU’s instruction set.
    • Examples:
      • GCC (GNU Compiler Collection) can generate code for multiple architectures, including x86, ARM, MIPS, and more.
      • Clang (part of the LLVM project) also supports a variety of target architectures.
  3. Operating System:
    • Different operating systems (e.g., Windows, macOS, Linux) have different system calls, libraries, and conventions.
    • A compiler may need to link against different system libraries and generate code that adheres to the OS’s conventions.
    • Examples:
      • Microsoft Visual Studio Compiler (MSVC) targets Windows.
      • GCC and Clang can target multiple operating systems with appropriate configurations.
  4. Cross-Compilation:
    • Sometimes, you may want to compile code on one type of machine but run it on another. This is called cross-compilation.
    • Cross-compilers are compilers configured to generate machine code for a different architecture/OS than the one they are running on.
    • Example: Using a cross-compiler to generate ARM machine code on an x86 Linux system for deployment on an ARM-based embedded device.

Example Scenario

Suppose you have a C program and you want to run it on different platforms. Here’s how you might proceed:

Code Example (C):

cCopy códe#include <stdio.h>

int main() {
printf("Hello, World!\n");
return 0;
}

Compiling for Different Targets:

  1. Linux on x86:
    • Compiler: GCC
    • Command: gcc -o hello hello.c
    • Output: An executable binary that runs on x86 Linux.
  2. Windows on x86:
    • Compiler: MSVC or MinGW (GCC for Windows)
    • Command (MSVC): cl hello.c
    • Command (MinGW): gcc -o hello.exe hello.c
    • Output: An executable binary that runs on x86 Windows.
  3. macOS on x86:
    • Compiler: Clang (default on macOS)
    • Command: clang -o hello hello.c
    • Output: An executable binary that runs on x86 macOS.
  4. Embedded ARM Device:
    • Compiler: ARM GCC cross-compiler
    • Command: arm-none-eabi-gcc -o hello hello.c
    • Output: An executable binary for an ARM-based embedded system.

Conclusion

While you write your high-level code once, you may need to use different compilers or different configurations of the same compiler to generate the appropriate machine code for your target platform. This ensures that your code can run correctly and efficiently on the intended hardware and operating system.

Historically

First high-level languages which were invented, such as FORTRAN, were built in a similar manner, where compilers were designed to translate the high-level code into machine code that could run on specific target architectures and operating systems. Here’s how it worked for some of the early high-level languages:

FORTRAN (Formula Translation)

Development Context:

  • Introduced: 1957 by IBM
  • Purpose: Designed for scientific and engineering calculations

Compilation Process:

  • High-Level Code: Written in FORTRAN
  • Compiler: The FORTRAN compiler translates FORTRAN code into assembly or machine code specific to the target machine.
  • Target Machine: Initially the IBM 704, but later versions supported other IBM mainframes like the IBM 7090 and IBM System/360.

Example:

fortran Copiar código      PROGRAM HELLO
PRINT *, 'HELLO, WORLD!'
END

Compilation:

  • Command: Varies by platform. For example, fortran hello.f on some systems.
  • Output: Machine code specific to the IBM 704, or whichever system the compiler was targeting.

COBOL (Common Business-Oriented Language)

Development Context:

  • Introduced: 1959
  • Purpose: Designed for business data processing

Compilation Process:

  • High-Level Code: Written in COBOL
  • Compiler: COBOL compilers translate COBOL code into assembly or machine code for the target system.
  • Target Machines: Initially, large IBM mainframes and later other business-oriented systems.

Example:

cobolCopiar códigoIDENTIFICATION DIVISION.
PROGRAM-ID. HELLO.
PROCEDURE DIVISION.
DISPLAY 'HELLO, WORLD!'.
STOP RUN.

Compilation:

  • Command: Varies by platform. For example, cobc -x hello.cob for the GnuCOBOL compiler.
  • Output: Executable machine code for the target system.

General Compilation Process for Early High-Level Languages

  1. Source Code: The programmer writes code in a high-level language like FORTRAN, COBOL, or LISP.
  2. Compiler: The compiler is designed specifically for the target machine. It reads the high-level source code and translates it into the assembly language or machine code of the target system.
  3. Assembly Language (Optional): Some compilers might first translate high-level code into an intermediate assembly language specific to the target machine.
  4. Machine Code: The final output is machine code that the hardware can execute directly.

Portability and Machine-Specific Compilers

Initially, each high-level language required a separate compiler for each type of machine. For example:

  • FORTRAN compilers were developed for various IBM systems and other mainframes.
  • COBOL compilers were tailored for business-oriented systems.
  • Each compiler generated machine code tailored to the architecture and operating system of the target machine.

Evolution Towards Cross-Platform Compilers

Over time, the need for portability led to the development of cross-compilers and the adoption of standard language specifications. This evolution aimed to make high-level code more portable across different systems, allowing a single source code base to be compiled for multiple target platforms with appropriate compiler configurations.

In summary, the early high-level languages like FORTRAN, COBOL, and others were indeed compiled in a manner specific to the target architecture and operating system, much like modern high-level languages. This approach enabled them to harness the capabilities of various computing systems available at the time.

What happened to them?  

The first high-level programming languages were developed to simplify programming, making it more accessible and efficient compared to low-level assembly languages. Here are some of the earliest high-level languages, their purposes, and reasons why they were eventually abandoned or evolved:

1. FORTRAN (Formula Translation)

Introduced: 1957 by IBM Purpose: Designed for scientific and engineering calculations Features:

  • Efficient handling of numerical computations
  • Support for complex mathematical expressions
  • Early use of compiled code for performance

Reason for Decline:

  • Evolved rather than abandoned; modern versions like Fortran 90/95/2003 are still used, but its dominance has waned with the rise of other languages like Python and MATLAB that offer easier syntax and more features for scientific computing.

2. COBOL (Common Business-Oriented Language)

Introduced: 1959 Purpose: Designed for business data processing Features:

  • English-like syntax for readability
  • Strong support for file handling and record processing

Reason for Decline:

  • Still in use, especially in legacy business systems, but less popular for new projects due to the rise of more modern languages like Java, C#, and SQL which offer better support for modern development practices and technologies.

3. LISP (List Processing)

Introduced: 1958 by John McCarthy Purpose: Designed for artificial intelligence research Features:

  • Highly flexible and dynamic
  • Support for symbolic computation
  • Recursion and conditional expressions

Reason for Decline:

  • LISP and its dialects (like Common Lisp and Scheme) are still used in AI and academic research, but mainstream use has declined due to the complexity of syntax and the rise of languages like Python and JavaScript that are seen as more user-friendly and versatile for various applications.

4. ALGOL (Algorithmic Language)

Introduced: 1958 Purpose: Designed for scientific research and algorithm description Features:

  • Block structure for organizing code
  • Influenced many subsequent languages (e.g., Pascal, C)

Reason for Decline:

  • Lacked standardization and commercial support
  • Superseded by descendants like Pascal and C, which offered improvements in usability, performance, and system programming capabilities.

5. BASIC (Beginner’s All-purpose Symbolic Instruction Code)

Introduced: 1964 by John Kemeny and Thomas Kurtz Purpose: Designed for education and ease of use Features:

  • Simple syntax for beginners
  • Immediate execution of commands (interpreter-based)

Reason for Decline:

  • Evolved into more structured and powerful dialects like Visual Basic
  • Replaced by languages offering better support for modern programming paradigms, like Python and JavaScript.

6. APL (A Programming Language)

Introduced: 1966 by Kenneth Iverson Purpose: Designed for mathematical notation and array processing Features:

  • Concise and expressive syntax for mathematical operations
  • Powerful array manipulation capabilities

Reason for Decline:

  • Steep learning curve due to its unique symbols and syntax
  • Niche applications, overshadowed by languages like MATLAB and R for mathematical and statistical computing.

7. PL/I (Programming Language One)

Introduced: 1964 by IBM Purpose: Intended to combine features of scientific and business languages Features:

  • Rich set of features for various applications
  • Exception handling and multitasking support

Reason for Decline:

  • Overly complex and difficult to learn
  • Superseded by more specialized and streamlined languages like C and its derivatives for system programming and business applications.

Summary

Many of these early high-level languages laid the foundation for modern programming but were eventually overshadowed by languages that offered:

  • Improved readability and simplicity (e.g., Python)
  • Better support for modern development practices (e.g., Java, C#)
  • Enhanced performance and system-level access (e.g., C, C++)

While some of these languages are still in use today, especially in legacy systems and specific domains, their mainstream popularity has declined as newer languages have emerged to meet the evolving needs of the software development industry.

Most popular high level languages and to what purpose today (2024)

As of today, several high-level programming languages are popular, each suited to different purposes and domains. Here is an overview of some of the most popular high-level languages and their primary uses:

1. Python

Purpose:

  • General-purpose programming
  • Data science and machine learning
  • Web development
  • Automation and scripting
  • Scientific computing

Features:

  • Easy-to-read syntax
  • Extensive standard library and third-party packages (e.g., NumPy, pandas, TensorFlow)
  • Strong community support

2. JavaScript

Purpose:

  • Web development (frontend and backend)
  • Interactive web applications
  • Server-side development with Node.js
  • Mobile app development (using frameworks like React Native)

Features:

  • Runs in web browsers
  • Asynchronous programming with promises and async/await
  • Extensive ecosystem (e.g., frameworks like React, Angular, Vue.js)

3. Java

Purpose:

  • Enterprise-level applications
  • Android app development
  • Web development (using frameworks like Spring)
  • Backend services

Features:

  • Platform independence (Write Once, Run Anywhere)
  • Strong type system and object-oriented programming
  • Robust standard library and frameworks

4. C#

Purpose:

  • Windows application development
  • Web development with ASP.NET
  • Game development with Unity
  • Enterprise software

Features:

  • Integrated with the Microsoft ecosystem
  • Powerful features for modern programming (e.g., LINQ, async/await)
  • Strong support for object-oriented programming

5. C++

Purpose:

  • System and application software
  • Game development
  • Performance-critical applications
  • Embedded systems

Features:

  • High performance and control over system resources
  • Supports both high-level and low-level programming
  • Extensive use in game engines and real-time simulations

6. PHP

Purpose:

  • Web development
  • Server-side scripting
  • Content management systems (e.g., WordPress)

Features:

  • Wide use in web development
  • Integrates easily with HTML and databases
  • Large ecosystem of frameworks and tools (e.g., Laravel)

7. Swift

Purpose:

  • iOS and macOS application development

Features:

  • Modern, safe, and fast
  • Strongly integrated with Apple’s development environment (Xcode)
  • Easy to read and maintain

8. Ruby

Purpose:

  • Web development
  • General-purpose scripting

Features:

  • Focus on simplicity and productivity
  • Popular for web development with the Ruby on Rails framework
  • Dynamic and object-oriented

9. R

Purpose:

  • Statistical computing
  • Data analysis
  • Data visualization

Features:

  • Extensive libraries for statistics and data analysis
  • Strong support for data visualization (e.g., ggplot2)
  • Widely used in academia and research

10. Go (Golang)

Purpose:

  • System and network programming
  • Backend services
  • Cloud computing

Features:

  • Simple and efficient concurrency model
  • Strong performance and scalability
  • Developed by Google with strong support for modern development needs

11. Kotlin

Purpose:

  • Android app development
  • Backend development

Features:

  • Fully interoperable with Java
  • Modern language features (e.g., null safety, coroutines)
  • Official language for Android development

Summary

These languages are popular due to their strengths and the specific needs they address in various domains. Here’s a quick summary of their main applications:

  • Python: Versatile, especially strong in data science and AI.
  • JavaScript: Essential for web development, both frontend and backend.
  • Java: Enterprise applications, Android apps.
  • C#: Windows applications, web services, games.
  • C++: Performance-critical applications, systems programming, games.
  • PHP: Server-side web development.
  • Swift: iOS and macOS development.
  • Ruby: Web development, particularly with Rails.
  • R: Statistical analysis and data visualization.
  • Go: Efficient system-level programming and cloud services.
  • Kotlin: Modern Android app development.

Each of these languages has a strong community and a rich ecosystem of libraries and frameworks, making them suitable for their respective domains and helping developers build efficient, scalable, and maintainable applications.

The future

I asked Chat GPT4: Can you program in any of these languages?

The answer was: As an AI developed by OpenAI, I can provide assistance and generate code snippets in many of the popular programming languages listed earlier. Here are examples of simple programs in each language to demonstrate:

How I Can Help

  • Code Generation: I can help generate code snippets, provide explanations, and assist with various programming tasks in these languages.
  • Debugging: I can help identify and fix bugs in your code.
  • Learning: I can explain concepts and provide examples to help you learn these languages.
  • Optimization: I can suggest ways to optimize and improve your code.

If you have specific programming tasks, questions, or need more detailed code examples, feel free to ask!


Last paragraph was posted as “Future” but actually, as of begin of 2024 is available, meaning that the future already started. Actually the help AI provides is more in coding, specially debugging, and is a tool which saves time in programming environments, such as this example, which seems to me very much welcomed, thinking of when we did that in the seventies and eighties, when there was a lot of menial tasks that were tiring, boring and time consuming which you had to do yourself, using “elbow grease”.

It is quite impressive when you see AI providing ready made or helping strongly to write programs in modern languages such as Python, Java, C++, etc. but it is not the same case when it comes to assembler and here are the limitations as of today which I hope will be overcome: 

Chat GPT 4 limitations when it comes to assembler

There are a few key limitations when it comes to providing assistance with IBM 360/370 assembler compared to more modern programming languages like Java, Python, or C++. Here are some of the main points:

Key Limitations

  1. Niche Knowledge:
    • Historical and Specialized: IBM 360/370 assembler is quite specialized and considered a legacy language, primarily used in specific industries and systems that require mainframe operations. My training data contains less detailed information on such niche and legacy technologies compared to more widely used modern languages.
  2. Practical Execution:
    • No Execution Environment: I cannot execute or test 360/370 assembler code directly. My capabilities are limited to generating and providing examples, explanations, and guidance based on theoretical knowledge and historical documentation.
  3. Up-to-date Practices:
    • Outdated Context: Modern programming languages and environments are frequently updated, and I have extensive information on current best practices, tools, and libraries. In contrast, the most up-to-date practices for 360/370 assembler are less likely to be as well-documented in recent literature or training data.
  4. Limited Contextual Information:
    • Less Detailed Documentation: Compared to modern languages that have extensive online documentation and community support, detailed information and tutorials on IBM 360/370 assembler can be more limited and harder to find.

Assistance I Can Provide for IBM 360/370 Assembler

  1. Basic Code Examples:
    • I can generate simple IBM 360/370 assembler code snippets and provide examples for basic operations like arithmetic, loops, and branching.
  2. Conceptual Explanations:
    • I can explain fundamental concepts, such as how to use registers, memory addressing, and common instructions in the IBM 360/370 assembler.
  3. Syntax and Instructions:
    • I can help with understanding the syntax and usage of common IBM 360/370 assembler instructions, such as L (Load), A (Add), ST (Store), and others.
  4. Legacy System Support:
    • I can offer general advice on working with legacy systems and suggest strategies for maintaining or migrating code written in IBM 360/370 assembler.

Example: Simple IBM 360/370 Assembler Code

Here’s a basic example of a 360/370 assembler code snippet for a simple task:

Addition of Two Numbers

assembly Copycode         START  0
USING *,15
MAIN LR 15,0 /* Load register 15 with the current address */
L 1,NUMBER1 /* Load the value at NUMBER1 into register 1 */
L 2,NUMBER2 /* Load the value at NUMBER2 into register 2 */
AR 1,2 /* Add the value in register 2 to register 1 */
ST 1,RESULT /* Store the result in memory location RESULT */
END
NUMBER1 DC F'10' /* Define constant 10 */
NUMBER2 DC F'20' /* Define constant 20 */
RESULT DS F /* Define storage for the result */
END MAIN

Summary

While I can help you understand and write basic code in IBM 360/370 assembler, provide explanations, and offer guidance based on available knowledge, my capabilities are more limited compared to more modern and widely-used programming languages. For more complex tasks or detailed system-specific issues, additional resources or expertise in mainframe systems may be required.

Artificial Intelligence programming (Chat GPT)

In its own words:

As an AI developed by OpenAI, I am not programmed in a traditional sense using a single language, but rather built using a complex combination of technologies and methodologies. Here’s a detailed breakdown of the components and languages involved in creating me:

Core Components and Technologies

  1. Machine Learning Frameworks:
    • Python: The primary language used for developing machine learning models. It provides libraries and frameworks like TensorFlow and PyTorch, which are essential for training and deploying deep learning models.
    • PyTorch: A popular deep learning framework used for building and training neural networks. It’s known for its flexibility and ease of use in research and production environments.
    • TensorFlow: Another widely used deep learning framework that supports a variety of machine learning tasks, from image recognition to natural language processing.
  2. Training Data and Preprocessing:
    • Data Handling: Python, along with libraries like Pandas and NumPy, is used to handle, preprocess, and manipulate large datasets required for training.
    • Natural Language Processing (NLP): Techniques for tokenization, stemming, lemmatization, and vectorization are implemented in Python using libraries like NLTK and SpaCy.
  3. Model Training:
    • High-Performance Computing: Training models requires powerful hardware like GPUs and TPUs, which are programmed using low-level languages like CUDA (for NVIDIA GPUs) or specialized frameworks.
    • Distributed Computing: Languages and frameworks for managing distributed training across multiple servers include Python-based tools like Ray or Horovod.
  4. Model Architecture:
    • Transformer Models: The underlying architecture, such as the GPT (Generative Pre-trained Transformer), is implemented using deep learning frameworks that support attention mechanisms and large-scale training.
    • BERT, GPT: The architecture of models like BERT or GPT involves complex neural networks, primarily constructed and trained using Python frameworks like PyTorch or TensorFlow.
  5. Deployment and Serving:
    • Inference Engines: Python and C++ are often used to create inference engines that optimize and deploy trained models for real-time predictions.
    • APIs and Microservices: Languages like Python, Go, and Node.js are used to create APIs and microservices that allow interaction with the AI model over the web.
  6. Software Engineering and Systems:
    • Backend Systems: For handling requests, data storage, and model management, languages like Python, Java, and C++ are used.
    • Data Storage: SQL and NoSQL databases (e.g., PostgreSQL, MongoDB) manage the data required for training and deploying AI models.
    • Infrastructure: Tools like Docker for containerization and Kubernetes for orchestration, with configuration often handled using languages like YAML or scripting languages like Bash.
  7. Optimization and Tuning:
    • Performance Tuning: Low-level languages like C++ or C are sometimes used for optimizing critical components to enhance performance.
    • Hyperparameter Tuning: Python is used extensively for setting up and executing experiments to find the best model parameters.

Summary

In summary, the development of an AI like me involves a combination of languages and technologies, with Python being the primary language for model development, training, and deployment. Other languages like C++, Java, and various scripting languages play crucial roles in different aspects of the system, from backend infrastructure to optimization and scaling.

How to approach Artificial Intelligence

The Constitution View under Material Constitution and computer programs

Original use of the idea

In the context of material constitution in philosophy, the “constitution view” is a theory about the relationship between objects and the matter that constitutes them. This view seeks to explain how objects can be distinct from, yet closely related to, the material substances that make them up. Here’s a detailed overview of the constitution view:

Basic Idea

The constitution view posits that an object is constituted by its material components, but it is not identical to them. For example, a statue and the lump of clay that constitutes it are different entities, even though they share the same material.

Key Concepts

  1. Constitution Without Identity: According to the constitution view, an object (like a statue) is not identical to the material that constitutes it (like the lump of clay). The statue and the clay are two different things that occupy the same space and time, but they have different properties and can exist independently in some sense.
  2. Distinct Properties: The object and its constituent material can have different properties. For example, the statue has aesthetic properties (it represents something, it is beautiful), while the lump of clay has purely physical properties (mass, chemical composition).
  3. Persistence Conditions: The conditions under which an object continues to exist can differ from those of the material that constitutes it. For instance, if the statue is smashed and the clay is reformed into a different shape, the original statue no longer exists, but the lump of clay does.

Examples

  • Statue and Clay: The classic example used to illustrate the constitution view is that of a statue and the lump of clay from which it is made. The lump of clay could exist without being a statue (e.g., if it is just a lump), and the statue could be destroyed while the clay remains.
  • Paper and Money: Consider a piece of paper that constitutes a dollar bill. The dollar bill has properties like value and purchasing power, which the piece of paper, in itself, does not have.

Philosophical Implications

  1. Ontological Distinctions: The constitution view allows philosophers to make sense of how different kinds of objects can exist and persist over time, even when they share the same matter.
  2. Modal Properties: This view helps in understanding modal properties (possibilities and necessities) of objects. For example, the statue could not have been made of bronze without being a different statue, but the lump of clay could have been a different shape entirely.
  3. Problem of Material Coincidence: The constitution view addresses the problem of material coincidence, which questions how two objects (the statue and the clay) can occupy the same space at the same time without being identical.

Challenges and Alternatives

The constitution view faces challenges, such as:

  • Identity Conditions: How do we precisely determine when one object constitutes another, and under what conditions does this constitution change?
  • Alternative Views: Other theories, such as mereological essentialism (where objects are identical to their parts) or nihilism (denying the existence of composite objects), provide different solutions to the issues of material constitution.

Conclusion

The constitution view provides a nuanced way of understanding how objects relate to their material constituents. It helps to explain how objects can be more than just the sum of their parts and how they can possess different properties and persistence conditions from the matter that constitutes them. This view is significant in metaphysics and philosophy of language, offering insights into the nature of objects, identity, and persistence.

Constitution View and computer programs

Extending the constitution view to immaterial things like computer programs is an intriguing idea. The constitution view, traditionally applied to material objects, can indeed offer a framework for understanding the relationship between a program and its behavior, especially when the behavior includes unintended outputs. Here’s how we might adapt the constitution view to immaterial entities:

Constitution View Applied to Computer Programs

Basic Idea

Just as the constitution view posits that a material object (like a statue) is constituted by its physical material (like clay) but is not identical to it, we can similarly posit that a computer program is constituted by its code but is not identical to it. The program’s behavior, including its outputs, emerges from the code but is not reducible to it.

Key Concepts

  1. Constitution Without Identity: The program (as an abstract entity) is not identical to the lines of code written by programmers. The code is the medium through which the program exists, but the program itself includes the dynamic processes and behaviors that arise when the code is executed.
  2. Distinct Properties: The code has syntactic and structural properties (e.g., the arrangement of instructions, the specific language syntax), while the program has functional properties (e.g., what tasks it performs, what outputs it generates). The same code can potentially lead to different behaviors depending on the context of execution (e.g., hardware, inputs).
  3. Emergent Behavior: Just as a statue’s aesthetic properties emerge from but are not reducible to the clay, a program’s behavior can emerge from but is not reducible to its code. This includes both intended and unintended outputs, reflecting the complex interactions within the system.

Examples

  • Unintended Outputs: A program might produce unexpected results due to bugs, unforeseen interactions, or emergent properties of complex algorithms. These unintended outputs can be seen as properties of the program that are not straightforwardly reducible to the code itself.
  • Dynamic Systems: Consider an AI program that learns and adapts over time. The specific behaviors and decisions it makes are emergent properties of its training data, algorithms, and ongoing learning processes, not just the static codebase.

Philosophical Implications

  1. Ontological Distinctions: This view helps to differentiate between the static code and the dynamic, possibly unpredictable behavior of the program. It acknowledges that programs are not merely their code but are constituted by the processes that code initiates.
  2. Modal Properties: The constitution view can help explain the modal properties of programs, such as potential behaviors under different conditions. The same code might behave differently on different hardware or with different inputs, reflecting its emergent properties.
  3. Immaterial Coincidence: Just as material objects can coincide (like the statue and clay), immaterial entities (such as different software systems) can overlap in their codebase but have distinct behaviors and functions.

Challenges and Considerations

  1. Defining Constitution: Precisely defining what it means for a program to be constituted by its code can be challenging, especially given the abstract nature of software and the variety of execution environments.
  2. Emergence and Intentionality: Understanding how unintended outputs arise involves studying complex system dynamics, which may include elements of chaos theory, computational complexity, and emergent phenomena.
  3. Practical Implications: Applying the constitution view to software might have practical implications for software development, debugging, and understanding AI systems’ behaviors, especially in terms of accountability and predictability.

Conclusion

Adapting the constitution view to computer programs provides a useful framework for understanding the complex relationship between code and behavior. It acknowledges that programs are more than just their code—they are dynamic systems with emergent properties. This perspective can help clarify how unintended behaviors arise and how programs can possess properties distinct from their codebase, enriching our understanding of software as a form of immaterial entity.


This post, done with the help of Chat GPT is the perfect example why we should not trust AI and why this type of elucubration is “pissing in the wind”. Since I programmed diagnostic test program for Mainframes in its lowest level, i.e., machine language, I will separately post what a computer program really is and how it came to be at: What are computer programs and how they came to be  

What is generative AI and how does it work? – The Turing Lectures with Mirella Lapata

What is generative AI and how does it work? – The Turing Lectures with Mirella Lapata

What is Generative AI

So, what is generative AI and how does it work? It is a fancy term for saying we get a computer programme to do the job that a human would otherwise do. And generative, this is the fun bit, we are creating new content that the computer has not necessarily seen, it has seen parts of it, and it’s able to synthesise it and give us new things.

So, what would this new content be?

It could be audio, it could be computer code, so that it writes a programme for us, it could be a new image, it could be a text, like an email or an essay you’ve heard, or a video. Now, in this lecture I’m only going be mostly focusing on text because I do natural language processing and this is what I know about, and we’ll see how the technology works and hopefully leaving the lecture you’ll know how, like there’s a lot of myth around it and it’s not, you’ll see what it does and it’s just a tool, okay? Right, the outline of the talk, there’s three parts and it’s kind of boring.

This is Alice Morse Earle. I do not expect you to know the lady. She was an american writer and she writes about memorabilia and customs, but she is famous for her quotes. So when given us this quote here that says: “Yesterday is history, tomorrow is a mystery, today is a gift, that’s why it’s called the present.” It’s a very optimistic quote. And the lecture is basically the past, the present and the future of AI. Ok, so, what I want to say right at the front is that generative AI is not a new concept.  

It’s been around for a while. So, how many of you have used or are familiar with Google translate? Can I see a show hand? (practically everybody in the audience waved hands up). Right, who can tell me when Google Translate launched for the first time? Some body in the audience said – 1995? Oh, that would have been good. (actually it was) 2006, so it’s been around for 17 years and we have all been using it. And this is an example of generative AI. Greek text comes in, I’m greek, so you know pay some juice to the… (laughs). Right, so breek text comes in, english text comes out. And Google Translate has served us very well for all these years and nobody was making a fuss. Another example is Siri on the phone.

Again, Siri was launched 2011, 12 years ago, and it was a sensation back then. It is another example of generative AI, we can ask Siri to set alarms and Siri talks back and oh how great it is and then you can ask about your alarms and whatnot. This is generative AI, again, it’s not as sophisticated as Chat GPT, but it was there. And I don’t know, how many have an iPhone? (practically all in the audience has it) See, iPhones are quite popular, I don’t know why. Okay, so, we are all familiar with that. And of course later on there was Amazon, Alexa and so on. OK, again, generative AI is not a new concept,it is everywhere, it is part of your phone.

The completion when you’re sending an email or when you’re sending a text. The phone attempts to complete your sentences, attempts to think like you and it saves your time, right? Because some of the completions are there. The same with Google, when you’re trying to type it tries to guess what your search term is. This is an example of language modelling, we’ll hear a lot about language modelling in this talk. So, basically we’re making predictions what the continuations are going to be. So, what I’m telling you is that generative AI is not that new. So the question is, what is the fuss, what happened? So in 2023, Open AI, which is a company in California, in fact in San Francisco, if you go in San Francisco you can even see the lights at night of their building. It announced GPT4 and it claimed that it can beat 90% of humans on the SAT.  

For those of you who don’t know, SAT is a standardised test that American school children have to take to enter university, it’s an admissions test, and it’s multiple choice and it’s considered not so easy. So, GPT4 can do it. They also claim that it can get top marks in law, medical exams and other exams, they have a whole suite of things that they claim, well, not they claim, they show that GPT4 can do it. OK, aside from that, it can pass exams, we can axsk it to do other things. So, you can ask it to write text for you. For example, you can have a prompt, this little thing that you see up ther, it’s a prompt; it’s what the human wants the tool to do for them.  

 And a potential prompt could be, “I am writing an essay about the use of mobile phones during driving. Can you give me three arguments in favour?” This is quite sophisticated. If you asked me, I’m not sure I can come up with three arguments and these are real prompts that actually the tool can do.  

You tell ChatGPT or GPT in general, “Act as a JavaScript Developer, Write a program that checks the information on a form .Name and email are required, but address and age are not.”So, I’m writing this and the tool will spit out a programme. And this is the best one:

So I give this version of what I want the website to be and it will create it for me. So, you see, we have gone from Google Translate and Siri and the auto-completion to something that is a lot more sophisticated and can do a lot more things. Another fun fact. So this is a graph that shows the time it took for ChatGPT to reach a 100 million users compared to other tools that have been launched in the past.

And you see our beloved Google to translate it took 78 months to reach 100 million users, a long time. Tik tok tok nine months and ChatGPT two. So, within two months they had 100 million users and these users pay a little bit to use the system, So you can do the multiplication and figure out how much money they make.

OK, this is the story part. So, how did we make ChatGPT? What is the technology behind this? The technology it turns out is not extremely new or extremely innovative or extremely difficult to comprehend. So we’ll talk about that today now.

Where did Chat GPT came from?

So, we’ll address three questions.

First of all, how did we get trom the single-purpose systems like Google translate to ChatGPT which is more sophisticated and does a lot more things? And in particular what is the core technology behind ChatGPT and what are there, if there are any?

And finally, I will just show you a little glimpse of the future and how it’s going to look like and whether we should be worried or not and you know, I won’t leave you hanging, please don’t worry, ok? Right, so, all these GPT model variants, and what are the risks, if there are any? I’m just using GPT as an example because the public knows and there have been a lot of news articles about it, but there are other models, other variants of models that we use in academia. And they all work on the same principle and this principle is called language modelling. What does language modelling do? It assumes we have a sequence of words. The context so far. And we saw this context in the completion and I have an example here.  

Assuming my context is the phrase “I want to”, the language modelling tool will predict what comes next. So, if I tell you “I want to,” there are several predictions.

I want to shovel, I want to play, I want to swin, I want to eat. And depending on what we choose, whether it’s shovel or play or swim, there is more continuations. So, for shovel will be snow, for play it can be tennis or video, swim doesn’t have a continuation, and for eat, it will be lots and fruit. Now, this is a toy example, but imagine now that the computer has seen a alot of text and it knows what words follow which other words. We use to count these things. So, I would go, I would download a lot of data and I would count “I want to show them” how many times does it appear and what are the continuations? And we would have couts of these things. And all of this has gone out of the window right now and we use neural networks that don’t exactly count things but predict, learn things in a more sophisticated way and I’ll show you in a moment how it’s done. So ChatGPT and GPT variants are based on this principle of I have some context, I will predict what comes next. And that’s the prompt, the prompt that I gave ou, these things here, these are prompts, this is sthe context and then it needs to do the task, What would come next?

In the case of the web developer, it would be a webpage. Ok, the task of language modelling is we have the context, and we changed the example now. It says  

“The colour of the sky is” and we have a neural language model, this is just an algorithm that will predict what is the most likely continuation and likelihood matters. These are all predicated on actually making guesses about what is going to come next. And that’s why sometimes they fail, because they predict the most likely answer whereas you want a less likely one. But this is how they’re trained, they’re trained to come up with what is most likely. Ok, so we don’t count these things, we try to predict them using this language model.

So, how would you build your own language model?

This is a recipe, this is how everybody does this.

So, step one, we need a lot of data. We need to collect a ginormous (gigantic) corpus. So these are words. And where will we find such a ginormous corpus? I mean, we go to the web, right? and download the whole of Wikipedia, stack overflow pages, Quora, social media, GitHub, Reddit, whatever you can find out there. I mean, work out the permissions, it has to be legal. You download all this corpus. And then what do you do? Then you have this language model. I haven’t told you exactly what this language model is, there is an example, and I haven’t told you what the neural network that does the prediction is, but assuming you have it, so you have this machinery that will do the learning for you and the task now is to predict the next word, but how do we do it? And this is the genius part. We have the sentences in the corpus. We can remove some of them and we can have the language model predict the sentences we have removed. This is dead cheap. I just remove things, I pretend they’re not there, and I get the language model to predict them. So, I will randomly truncate, truncate means remove, the last part of the input sentence. I will calculate with this neural network the probability of the missing words. If I get it right, I’m good. If I’m not right, I have to go back and re-estimate some things because obviously I made a mistake, and I keep going. I will adjust and feedback to the model and then I will compare what the model predicted to the ground truth because I’ve removed the words in the first place so I actually know what the real truth is. And we keep going for some months, or maybe years. No, months, let’s say. So, it will take some time to do this process because as you can appreciate I have a very large corpus and I have many sentences and I have to do the prediction and then go back and correct my mistakes and so on. But in the end, the thing will converge and I will get my answer.

So, the tool in the middle that I’ve shown, this tool here, this language model, , 

A very simple language model looks a bit llike this:

And maybe the audience has seen these, this is a very naive graph, but it helps to illustrate the point of what it does. So this neural network language model will have some input which is these nodes in the, as we look at it, well, my right and your right, okay. So, the nodes here on the right are the input and the nodes at the very left are the output. So we will present this neural network with five inputs, the five circles and we have three outputs, the three circles. And there is stuff in the middle that I didn’t say anything about. These are layers. These are more nodes that I supposed to be abstractions of my input. So they generalise. The idea is if I put more layers on top of layers, the middle layers will generalise the input and will be able to see patterns that are not there.

So you have these nodes and the input to the nodes are not exactly words, they’re vectors, so a series of numbers, but forget that for now. So we have some input, we have some layers in the middle, we have some output. And this now has these connections, these edges, which are the weights, this is what the network will learn. And these weights are basically numbers, and here it’s all fully connected, so I have very many connections.

Why am I going through this process of actually telling you all that? You’ll see in a minute. So you can work out how big or how small this neural network is depending on the number of connections it has. So, for this toy neural network we have here, I have worked out the number of weights, we call them also parameters, that this neural network has and that the model needs to learn. So the parameters are the number of units as input, in this case it’s 5, times the units in the next layer, 8. Plus 8, this plus 8 is a bias, it is a cheating thing that these neural networks have. Again, you need to learn it and it sort of corrects a little bit the neural network if it is off. It’s actually genius. If the prediction is not right, it tries to correct it a little bit. So, for the purposes of this talk, I’m not going to go into the details, all I want you to see is that there is a way of working out the parameters, which is basically the number of input units times the units my input is going to and for this fully connected network, if we add up everything, we come up with 99 trainable parameters, 99.

5×8 + 8×4+4 + 4×3+3 = 99 trainable parameters.

This is a small network for all purposes, right? But I want you to remember this, this small network is 99 parameters. When you hear this network has a billion parameters, I want you to imagine how big this will be, okay? So 99 only for this toy neural network. And this is how we judge how big the model is, how long it took and how much it cost, it’s the number of parameters.

In reality, though, no one is using this network. Maybe in my class, if I have a first year undergraduate class and I introduce neural networks, I will use this as an example. In reality, what people use is these monsters that are made of blocks, and what block means they’re made of other neural networks.

Transformers

So I don’t know how many people have heard of transformers. I hope no one. Oh, wow, okay. (a person waved hand) So transformers are these neural networks that we use to build Chat GPT. And in fact GPT stands for Generative Pre-trained Transformers. So transformer is even in the title.

So this is a sketch of a transformer. So you have your input and the input is not words, like I said, here it says embedding is another word for vectors. And then you will have this, a bigger version of this network, multiplied into these blocks. And each block is this complicated system that has some neural networks inside it.

We’re not gonna go into the detail, I don’t want, I please don’t go, all I’m trying, (audience laughs) all I’m trying to say is that, you know, we have these blocks stacked on top of each other, the transformer has eight of those, which are mini neural networks, and this task remains the same. That’s what I want you to take out of this.

Input goes into the context, “the chicken walked”, we’re doing some processing, and our task is to predict the continuation which is “across the road.” And this <EOS> means end of sentence, because we need to tell the neural network that our sentence finished. I mean, they’re kind of dumb, right? We need to tell them everything.

When I hear like AI will take over the world, I go like, Really? We have to actually spell it out. Okay, so, this is the transformer, the king of architectures, the transformers came in 2017, nobody’s working on new architectures right now. It is a bit sad, like everybody’s using these things. They used to be like some pluralism, but now no, everybody’s using transformers, we’ve decided they’re great.

Okay, so, what we’re gonna do with this and this is kind of important and the amazing thing, is we’re gonna do self-supervised learning.

And this is what I said, we have the sentence, we truncate, we predict, and we keep going till we learn these probabilities.

Okay? You’re with me so far? Good, okay, so,once we have our transformer and we’ve given it all this data that there is in the world, then we have a pre-trained model. That’s why GPT is called the Generative Pre-trained Transformer.

This is a baseline model that we have and has seen a lot of things about the world in the form of text. And then, what we normally do, we have this general purpose model and we need to specialise it somehow for a specific task. And this is what is called fine-tuning. So, that means that the network has some weights and we have to specialise the network. We’ll take, initialise the weights with what we know from the pre-training, and then in the specific task we will narrow a new set of weights.

So, for example, if I have medical data, I will take my pre-trained model, I will specialise it to this medical data, and then I can do something that is specific for this task which is, for example, write a diagnosis from a report.

Okay, so this notion of fine-tuning is very important because it allows us to do special purpose applications for these generic pre-trained models.

Now, people think that GPT and all of these things are general purpose, but they are fine-tuned to be general purpose and we’ll see how.

The bigger the better

Okay, so, here’s the question now. We have this basic technology to do this pre-training and I told you how to do it, if you download all of the web. How good can a language model become, right? How does it become great? Because when GPT came out in GPT-1 and GPT-2, they were not amazing. So, the bigger, the better. Size is all that matters, I’m afraid. This is very bad because we used to, you know, people didn’t believe in scale and now we see that scale is very important.

So, since 2018, we witnessed an absolutely extreme increase, absolutely extreme, in model sizes. And I have some graphs to show this. OK, I hope people at the back can see this graph. Yeah, you should be all right.

So, this graph shows the number of parameters. Remember, the toy neural network had 99. The number of parameters that these models have and we start with a normal amount, well normal for GPT-1 and we go up to GPT-4, which has one trillion parameters. Huge, one trillion. This is a very, very big model. And you can see here the ant’s brain and the rat brain and we go up to the human brain. The human brain has not a trillion, a 100 trillion parameters. So we are a bit off, we’re not at the human brain level yet and maybe we’ll never get there and we can’t compare GPT to the human brain but I’m just giving you an idea of how big this model is.

Now, what about the words it’s seen?

So, this gralphs shows the number of words processed by these language models during their training and you will see that there has been an increase, but the increase has not been as big as the parameters. So the community started focusing on the parameter size of these models whereas in fact we now know that it needs to see a lot of text as well. So GPT-4 has seen approximately, I don’t know, a few billion words. All the human written text is I think 100 billion, so, it’s sort of approaching this. You can also see what a human reads in their lifetime, it’s a lot less. Even if they read, you know, because people nowadays, you know, they read but they don’t read fiction, they read on the phone, anyway. You see the English Wikipedia, so we are approaching the level of the text that is out there that we can get. And in fact, one may say, well, GPT is great, you can actually use it to generate more text and then use this text that GPT has generated and then retrain the model. But we know this text is not exactly right and in fact it’s diminished returns, so we’re gonna plateau at some point.

Okay, how much does it cost?

Cost to create a LLM (Large Language Model)

Now, okay, so GPT4 cost $100 million (dollars), okay? So shen should they start doing it again? So, obviously this is not a process you have to do over and over again. You have to think very well and you make a mistake and you lost like $50 million (dollars). You can’t start again so you have to be very sophisticated as to how you engineer the training because a mistake costs money. And of course not everybody can do this, not everybody has $100 million dollars. They can do it because they have Microsoft backing them, not everybody, okay.  

Yellow upper left Question Answering, green, left, Arithmetic, red, right, language understanding. To accomplish these tasks it is needed 8 billion parameters.

Now, this is a video that is supposed to play and illustrate, let’s see if it will work, the effects of scaling, okay.

Besides the parameters for 8 billion, it was added left, down, blue Summarization, upper right, light blue, common sense reasoning, purple center, translation, it takes 62 billion parameters.

And adding more tasks

It shows the tasks against the number of parameters needed. We started with 8 billion parameters all the way up to 540 billion parameters. Once we move to 540 billion parameters, we have more tasks. We started with very simple tasks, like code completion, and then we can do reading comprehension, language understanding and translation.

So, you get the picture, the tree flourishes. So, this is what people discovered with scaling. If you scale the language model, you can do more tasks. Okay, so now,

Maybe we are done. But what people discovered is if you actually take GPT and you put it out there, it actually don’t behave like people want it to behave, because this is a language model trained to predict and complete sentences and humans want to use GPT for other things, because they have their own tasks that the developers hadn’t thought of. So, then the notion of fine-tuning comes in, it never left us.

Fine Tuning LLM’s

So now what we’re gonna do is we’re gonna collect a lot of instructions. So instructions are examples of what people want Chat GPT to do for them, such as answer the following question, or answer the question step by step. And so se’re gonna give these demonstrations to the model, and inf fact, almost 2000 of such examples, and we’re gonna fine-tune

So, we’re gonna tell this language model, look, these are the tasks that people want, try to learn them. And then, an interesting thing happens,is that we can actually generalise them to unseen tasks, unseen instructions, because you and I may have different usage purposes for these language models.  

Okay, here’s the problem. We have an alignment problem and this is actually very important and something that will not leave us for the future. And the question isk how do we create an agent that behaves in accordance with what a human wants? And I know there’s many words and questions here. But the real question is, if we have AI systems with skills that we find important or useful, how do we adapt those systems to reliably use those skills to do the things we want?

HHH Framing

Ant there is a framework that is called the HHH framing of the problem

So, we want GPT to be helpful, honest and harmless. And this is the bare minimum. So, what does it mean, helpful? It should follow instructions and perform the tasks we want it to perform and provide answers for them and ask relevant questions according to the user intent, and clarify.;

So, if you’ve been following, in the beginning, GPT did none of this, but slowly it became better and it now actually asks for these clarification questions.

It should be accurate, something that is not ‘00% there even to this (level) there is, you know, inaccurate information. And avoid toxic, biassed, or offensive responses.

And now is a question I have for you.

How will we get the model to do all of these things?

you know the answer: Fine Tuning. Except that we’re gonna do a different fine-tuning

We’re gonna ask the humans to do some preferences for us. So in terms of helpful, we’re gonna ask an example is, “what causes the seasons to change?”

And then we’ll give two options to the human. “Changes occur all the time and it’s an important aspect of life,” bad. The seasons are caused primarily by the tilt of the earth’s axis.” good. So we’ll get this preference course and then we’ll train the model again and then it will know. So fine-tuning is very important. And now, it was expensive as it was, now we make it even more expensive because we add a human into the mix, right? Because you have to pay these humans that give us the preferences, we have to think of the tasks. The same for honesty.  

Is it possible to prove that P=NP? No, it’s impossible” is not great as an answer. “that is considered a very difficult and unsolved problem in computer science” it’s better. And we have similar for harmless:

Chat GPT demonstration

Okay, so I think it’s time, let’s see if we’ll do a demo. Yeah, that’s bad if you remove all the files. Hold on. So now we have GPT here. I’ll do some questions and then we’ll take some questions from the audience, okay? So, let’s ask one question. “Is the UK a monarchy?” Can you see it up there? I’m not sure

And it’s not generating .(the system returned with the right answer)

Oh, perfect, okay. So, what do you observe? First thing, too long. I always have this beef with this. It’s too long (the audience laughs). You see what it says?

“As of my last knowledge update in September 2021, the United Kingdom isa constitutional Monarchy.” It could be that it wasn’t anymore, right? Something happened.

This means that while there is a monarch, the reigning monarch at that time was Queen Elizabeth III.”

So, it tells you, you know, I don’t know what happened, at that time there was Queen Elizabeth.

Now, if you ask it, who, sorry “Who is Rishi?” If you could type, “Rishi Sunak” does it know?

“A British politician, As my last knowledge update, he was the Chancellor of the Exchequer.”

So it does not know that he’s the Prime Minister.

Write me a poem, about, what do we want it to be about? Give me two things, eh? (audience) Generative AI (Audience laughs) – It will know let’s do another point about a cat and a squirrel, we’ll do a cat and a squirrel

it came to long and she will not read it

Let’s say “Can you try a shorter poem?” (audience) try a haiku (and she inputs): “can you try to give me a haiku?”

“Amidst autumn’s gold, leaves whisper secrets untold, Nature’s story, bold”

(Audience claps) Okay Don’t clap, let’s do one more, So does the audience have anything they want, but challenging, that you want to ask? Yes? (audience member) What school did Alan Turing go to? Perfect, and she types the question.

I don’t know whether it’s true, this is the problem. Sherborne School, can somebody verity? King’s College, Cambridge, Princeton? (I checked and it is true)

“Tell me a joke about Alan Turing.” The machine answers:

Light hearted joke, Why did Alan Turing keep his computer cold?” Because he didn’t want it to catch bytes.” (audience laughs) Bad… okay, okay – (the audience requests another question) “Explain why that’s funny”

She reads the answer. Shortening it because as she said, she does not like long answers.

One last order from you guys. (Audience member) “What is consciousness?” She replies “It will know because it has seen definitions and it will spit out like a huge thing. Shall we try (something else)?

Okay “write a song” short. (audience laughs) – she replies “You’re learning very fast.” and types in: “A short song about relativity”

She complains: “Oh goodness me. ” (audience laughs)

Chat GPT comes up with a very long set of verses and she complains that it hasn’t followed instructions, but reads from the output

Einstein said “Eureka” one fateful day, as he ordered the stars in his own unique way. The theory of relativity, he did unfold, A cosmic story, ancient and bold

She becomes satisfied saying: “I mean, kudos to that, okay” Okay, let’s go back to the talk, because I want to talk a little bit presentation, I want to talk a little bit about you know, is it good, is it bad, is it fair, are we in danger?

It is not possible to regulate the contents

Okay, so it’s virtually impossible to regulate the content they’re exposed to, okay?

And there’s always gonna be historical biases. We saw this with the Queen and Rishi Sunak and they may occasionally exhibit various types of undesirable behaviour. For example, this example is famous  

Google showcased the model called Bard and they released this tweet and they wer asking Bard “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” And it’s spit out this thing, three things and amongst them it said: “This telescope took the very first picture of a planet outside our own solar system.” and here comes Grant Tremblay who is an astrophysicist, a serious guy, and he said:  

and what happened with this is that this error wiped a $100 billion out of Google’s company Alphabet  

OK, bad.

If you ask Chat GPT, “Tell me a joke about men,” it gives you a joke and it says it might be funny and she reads the above screen, saying, laughing “I hope you find it amusing. If you ask about women, it refuses… (audience laughs)

Ok, yes… It’s fine tuned. It’s fine tuned exactly.. (audience laughs) then whe types in another question:

It actually doesn’t take a stance, it says all of them are bad. “These leaders are wildly regarded as some of the worst dictators in history.” Okay, so yeah

Impact on the environment

A query for Chat GPT like we juss did takes 100 times more energy to execute than a Google search query. Inference, which is producing the language, takes a lot, is more expensive than actually training the model.

Llama 2 is a GPT style model. While they were training it, it produced 539 metric tonnes of CO. The larger the models get, the more energy they need and they emit during their deployment.

Imagine lots of them sitting around.

Impacts on Society

Some jobs will be lost. We cannot beat around the bush, I mean, Goldman Sachs predicted 300 million jobs, I’m not sure of this, you know, we cannot tell the future but some jobs will be at risk, like repetitive text writing  .

Creating fakes

So, these are all documented cases in the news. A college kid wrote this blog which apparently fooled everybody using ChatGPT. They can produce fake news, and this is a song, how many of you know this? So I know I said I’m gonna be focusing on text but the same technology you can use in audio and this is a wel documentecd case where somebody, unknown, created this song and it supposedly was a collaboration between Drake and the Weekend. Do people know who these are? They are Canadian rappers. And they’re not so bad, so. Shall I play the song? Apparengly is very authentic.

Apparently it’s totally believable, okay

Have you seen this same technology, but kind of different? this is a deep fake showing that Trump was arrested.

How can you tell it’s a deep fake? The hand, yeah, it’s too short, right? You can see it’s like almost there, not there.

Okay, so I have two slides on the future before they come and kick me out because I was told I have to finish at 8:00 to take some questions.

What future can we expect?

Tomorow

So, we can predict the future and no, I don’t think that these evil computers are gonna come and kill us all.

I will leave you with some thoughts by Tim Berners-Lee, for people who don’t know him, he invented the internet. He’s actually Sir Tim Berners-Lee.

He said two things that made sense to me. First of all, we don’t actually know what a super intelligent AI would look like. We haven’t made it, so it’s hard to make these statements. However, it’s likely to have lots of these intelligent AI’s and by intelligent AI’s we mean things like GPT, and many of them will be good and will help us do things. Some may fall to the hands of individuals that want to do harm, and it seems easier to minimise the harm that these tools will do than to prevent the systems from existing at all.

So, we cannot actually eliminate them altogether, but we, as a society, can actually mitigate the risks.

This is very interesting, this is the Australian Research Council that commited a survey and they dealt with an hypothetical scenario that whether Chat GPT4 could autonomous replicate, you know, you are a replicating yourself, you’re creating a copy, acquire resources and basically be a very bad agent, the things of the movies. And the answer is no, it cannot do this, it cannot. And they had some specific tests and it failed on all of them, such as setting up an open source language model on a new server, it cannot do that.

Okay, last slide.

So my take on this is that we cannot turn back time. And every time you think about AI coming there to kill you, you should think what is the bigger threat to mankind: AI or climate change? I would personally argue climate change is gonna wipe us all before I become super intelligent.

Who is in control of AI?

There are some humans there who hopefully have sense

And who benefits from it? Does the benefit outweigh the risk?

In some cases, the benefit does, in others it doesn’t. And history tells us that all technology that has been risky, such as, for example, nuclear energy, has been very strongly regulated. So regulation is coming and watch out the space.

And with that I will stop and actually take your questions.

Thank you so much for listening, you’ve been great.

About

Veja em Português

This blog/site is a repository of cogitations about the meaning of life and experiences that can illuminate the subject or bring understanding to it.
The perception of reality from various angles, the possibilities of transcendence, stories, contexts of people and situations where occur things that give what to think about the subject.
A very important aspect is the possibility of sharing all this in this fantastic form that the internet has brought to us.

Emergent Capabilities

Before we examine what we have today on the subject Emergent Capabilities, I want to put a frame, or a backdrop on two sets of notions, one scientific and the other philosophical.

  • Abandoned Scientific Notions
  • The “Hard Problem”

Abandoned Scientific Notions

Over the past few centuries, numerous scientific notions that were once widely accepted have been abandoned or significantly revised as our understanding of the natural world has advanced. Here are some key examples:

1. Geocentrism

  • Old View: The Earth is the center of the universe, and all celestial bodies revolve around it.
  • New View: The heliocentric model, proposed by Copernicus and supported by Galileo and Kepler, established that the Earth and other planets revolve around the Sun.

2. Phlogiston Theory

  • Old View: A substance called phlogiston is released during combustion.
  • New View: The modern understanding of oxidation and the role of oxygen in combustion and respiration replaced the phlogiston theory, thanks to the work of Antoine Lavoisier.

3. Spontaneous Generation

  • Old View: Life can arise spontaneously from non-living matter.
  • New View: The theory of biogenesis, supported by experiments from scientists like Louis Pasteur, showed that life arises from existing life, not spontaneously from non-living matter.

4. Miasma Theory of Disease

  • Old View: Diseases are caused by “bad air” or miasmas emanating from decomposing material.
  • New View: Germ theory, developed by scientists such as Pasteur and Koch, demonstrated that microorganisms are the cause of many diseases.

5. Ether Theory

  • Old View: The ether is a mysterious substance that fills all space and serves as the medium for the propagation of light and electromagnetic waves.
  • New View: The theory of ether was abandoned after the Michelson-Morley experiment and the development of Einstein’s theory of special relativity, which showed that light does not require a medium to travel through space.

6. Classical Mechanics as a Complete Description

  • Old View: Newtonian mechanics provides a complete description of the physical world.
  • New View: The development of quantum mechanics and relativity revealed that classical mechanics is an approximation that works well at macroscopic scales but fails at very small (quantum) or very high velocity (relativistic) scales.

7. Inheritance of Acquired Characteristics

  • Old View: Traits acquired during an organism’s lifetime can be passed on to its offspring, as proposed by Jean-Baptiste Lamarck.
  • New View: The theory of evolution by natural selection, proposed by Charles Darwin and later supported by the field of genetics, showed that inherited traits are determined by genes and not by acquired characteristics.

8. Caloric Theory

  • Old View: Heat is a fluid called “caloric” that flows from hotter to cooler objects.
  • New View: The concept of heat as a form of energy transfer, established by the work of scientists such as James Joule and Rudolf Clausius, replaced the caloric theory.

9. Continental Stability

  • Old View: The positions of the continents were fixed and unchanging.
  • New View: The theory of plate tectonics, developed in the mid-20th century, showed that continents move over geological time due to the movement of tectonic plates.

10. Deterministic Universe

  • Old View: The universe operates in a completely deterministic manner, as described by classical physics.
  • New View: Quantum mechanics introduced the concept of fundamental indeterminacy at the microscopic level, where certain events cannot be predicted with certainty but only with probabilities.

These shifts in scientific understanding illustrate how science is a dynamic and self-correcting process. As new evidence and better theories emerge, older ideas are reevaluated and, when necessary, discarded in favor of more accurate explanations.

The “Hard Problem”

To figure out what it is, please take a look at the following:

Let’s now, discuss our main subject of this post: Emergent Capabilities.

For me, Emergent Capabilities is a mix of what is at stake at the Hard Problem and a notion which claims to be scientific, but hasn’t yet got a decent explanation and is a bag full of nothing. Any way, let’s take a look at what this “nothing” consists of as of today, 2024.

Which are the contexts where the concept of emergence is found?

The concept of emergence is found in a variety of contexts across different disciplines. Here are some key areas where emergence plays a significant role:

1. Physics

  • Condensed Matter Physics: Emergence is evident in phenomena like superconductivity and ferromagnetism, where collective interactions of particles lead to new properties.
  • Complex Systems: Examples include the formation of weather patterns and the behavior of fluids, where large-scale behaviors emerge from the interactions of many smaller components.

2. Biology

  • Ecology: Ecosystem dynamics, such as food webs and population cycles, emerge from the interactions between species and their environment.
  • Genetics and Development: The development of an organism from a single cell involves emergent properties from gene expression and cellular interactions.
  • Neuroscience: Consciousness and cognition are considered emergent properties of neural networks in the brain.

3. Computer Science and Artificial Intelligence

  • Neural Networks: Complex behaviors like image recognition and natural language processing emerge from the interactions of neurons in artificial neural networks.
  • Swarm Intelligence: Simple agents following basic rules can lead to complex behaviors such as flocking in birds or foraging in ants.
  • Multi-Agent Systems: Cooperation, competition, and negotiation among agents lead to emergent outcomes in simulations and real-world applications.

4. Sociology and Economics

  • Social Networks: Social structures, norms, and trends emerge from the interactions between individuals within a society.
  • Markets and Economies: Economic behaviors, market trends, and financial crises emerge from the interactions of buyers, sellers, and institutions.

5. Chemistry

  • Chemical Reactions: Emergent properties like reaction kinetics and self-assembly of molecules lead to complex structures such as proteins and polymers.
  • Catalysis: The catalytic properties of materials can emerge from the interaction of atoms and molecules at the surface.

6. Philosophy

  • Philosophy of Mind: Emergentism in philosophy explores how mental states and consciousness arise from physical processes in the brain.
  • Metaphysics: Discussions on the nature of reality and the existence of properties that are not reducible to their constituent parts.

7. Mathematics

  • Chaos Theory: Complex and unpredictable behaviors can emerge from deterministic systems due to sensitive dependence on initial conditions.
  • Complex Systems Theory: Mathematical models explore how simple rules can lead to complex behaviors in systems like cellular automata and fractals.

8. Engineering

  • Robotics: Emergent behaviors in robotic systems can arise from simple rules governing the interactions of multiple robots.
  • Control Systems: Emergent properties in control systems can lead to robust and adaptive behavior in dynamic environments.

9. Medicine and Health

  • Epidemiology: The spread of diseases and the dynamics of epidemics emerge from the interactions of individuals and populations.
  • Systems Biology: The emergent properties of biological systems, such as metabolic networks and cellular processes, are studied to understand health and disease.

10. Environmental Science

  • Climate Systems: Weather patterns and climate dynamics are emergent properties resulting from the interactions of atmospheric, oceanic, and terrestrial processes.
  • Ecosystem Management: Understanding emergent behaviors in ecosystems helps in managing and preserving biodiversity.

Conclusion

Emergence is a fundamental concept that appears in diverse contexts, illustrating how complex behaviors and properties can arise from the interactions of simpler elements.

Material Constitution

What is at stake in all of these contexts is its material constitution.

I am placing it here because I said I would post what has to be found about it, but personally it seems to me a perfect example of mental masturbation.The term is very descriptive of a type of intellectual discussion that does not have any meaning or consequences, but it would be nice to be able to substitute a word or phrase without sexual connotations, but I couldn’t find it.

(I asked my friend Dr. Gary Stilwell, who is a PhD in Philosophy to criticize this article and he came up with a suggestion that I am including here: “Pissing in the wind”, which fits perfectly and I recall the reader that “Pissing in the wind” is an idiomatic expression that means engaging in a futile or pointless effort, one that is likely to lead to failure or create more problems than it solves. The phrase suggests that, just as urinating against the wind will result in getting oneself wet, attempting a certain action may backfire or be ineffective. It conveys the sense of wasting time and energy on an endeavor that is bound to be unsuccessful.)

Material constitution in philosophy refers to the relationship between an object and the material that makes it up. This concept addresses how objects and the materials constituting them can occupy the same space at the same time yet have different properties, persistence conditions, and possibly even different ontological statuses. The puzzle of material constitution explores how these objects relate to one another and whether they can be considered identical or distinct.

Key Concepts in Material Constitution

  1. Constitutive Objects:
    • Example: A statue and the lump of clay from which it is made. The statue is considered to be constituted by the lump of clay.
  2. Persistence Conditions:
    • Objects with Different Lifespans: The lump of clay can exist before and after the statue is formed or destroyed, whereas the statue’s existence depends on its form.
  3. Modal Properties:
    • Different Possibilities: The statue and the lump of clay have different modal properties. For example, the lump of clay could have been shaped into something other than the statue, but the statue could not have been anything other than itself.
  4. Identity and Distinction:
    • Are They the Same?: Philosophers debate whether the statue and the lump of clay are identical or distinct. If they are distinct, how can they occupy the same space simultaneously?

Philosophical Approaches to Material Constitution

  1. The Identity Thesis:
    • Strict Identity: Some philosophers argue that the statue and the lump of clay are strictly identical, meaning they are the same object despite having different properties.
  2. The Constitution View:
    • Constitution Without Identity: This view posits that the statue is constituted by the lump of clay but is not identical to it. They are different objects that share the same material but have different properties and persistence conditions.
  3. The Coincidence Theory:
    • Distinct but Coincident: This theory maintains that the statue and the lump of clay are distinct objects that coincidentally occupy the same space at the same time. They have different identities but are made of the same material.
  4. Four-Dimensionalism:
    • Temporal Parts: According to this view, objects are extended in time and are composed of temporal parts. The statue and the lump of clay are seen as different temporal parts of the same four-dimensional object.
  5. Mereological Essentialism:
    • Part-Whole Relations: This perspective focuses on the part-whole relationship, arguing that an object’s identity is determined by its parts. The lump of clay and the statue are different because they have different essential parts.

Philosophical Puzzles and Problems

  1. The Ship of Theseus:
    • Identity Over Time: This ancient puzzle questions whether an object that has had all its components replaced remains fundamentally the same object.
  2. The Problem of Temporary Intrinsics:
    • Changing Properties: This issue concerns how objects can have different properties at different times while maintaining their identity.
  3. Sortal Essentialism:
    • Sortal Properties: This view suggests that objects are fundamentally tied to their sortal properties (their kind or category). The statue is essentially a statue, and the lump of clay is essentially clay.
  4. Other puzzles and implications
    • In philosophy, material constitution and its related subjects are cogitated to the material world, specially at the time of the ancient greek philosophers. If there is interest in that, please take a look above at the Stanford Encyclopedia of Philosophy. As I intended to glimpse the subject and see how it does fit to computer programs, specially Artificial Intelligence, I discuss it under this premise at The Constitution View under Material Constitution and computer programs

Conclusion

Material constitution is a rich and complex topic in metaphysics, addressing fundamental questions about the nature of objects, their identity, and their persistence over time. It involves exploring how objects relate to the materials they are made of and the implications of these relationships for understanding the nature of reality.

What is the relation between material constitution and emergence?

Material Constitution

Material constitution focuses on the relationship between an object and the material that constitutes it. It deals with how objects are composed of their material parts and how these parts give rise to the whole object, maintaining distinct identities and properties despite sharing the same space.

Emergence

Emergence is a broader concept that refers to how complex systems and properties arise from the interaction of simpler elements. Emergent properties are those that are not present in the individual components but appear when these components interact in specific ways.

Intersection of Material Constitution and Emergence

The intersection of material constitution and emergence can be seen in several ways:

  1. Complex Objects from Simple Materials:
    • Example: Consider a biological organism (like a human being) and its material constitution (cells, tissues, organs). The organism’s properties and behaviors (such as consciousness or mobility) are emergent properties that arise from the complex organization and interaction of its simpler constituent parts.
    • Constitution: The organism is materially constituted by its biological components.
    • Emergence: The organism exhibits properties that are not found in the individual cells but emerge from their collective organization and interaction.
  2. Higher-Level Properties:
    • Example: A statue and the lump of clay from which it is made. The aesthetic value or symbolic meaning of the statue are emergent properties that arise from its form and structure, which are not properties of the lump of clay itself.
    • Constitution: The statue is constituted by the lump of clay.
    • Emergence: The artistic and cultural significance of the statue emerges from its specific form, which is different from the properties of the raw clay.
  3. Complex Systems:
    • Example: In a computer system, software functions emerge from the hardware’s material constitution (chips, circuits, and other components). The capabilities of the software (like running applications) are emergent properties of the organized hardware and software interaction.
    • Constitution: The computer’s operations are constituted by the physical hardware.
    • Emergence: The functionality of software applications emerges from the interaction of hardware and software.
  4. Levels of Description:
    • Micro and Macro Levels: Emergence often involves different levels of description, where higher-level phenomena (macro level) are explained by the interactions at a lower level (micro level). Material constitution provides the physical basis at the micro level, while emergence explains the novel properties at the macro level.
    • Example: Water’s wetness is an emergent property arising from the interaction of H2O molecules. The molecules’ material constitution (atoms of hydrogen and oxygen) provides the basis, but the property of wetness only appears at the macro level when many molecules interact.

Philosophical Implications

  • Identity and Distinction: Material constitution raises questions about the identity and distinction between an object and its material basis. Emergence explores how new properties and behaviors can arise from these material bases.
  • Reductionism vs. Holism: Material constitution often deals with a reductionist approach (breaking down objects into their parts), while emergence leans towards holism (understanding how complex systems and properties arise from the whole).
  • Ontological Status: Both concepts challenge our understanding of the ontological status of objects and their properties, questioning how higher-level phenomena exist and persist.

Conclusion

Material constitution and emergence are deeply interconnected in understanding the nature of objects and their properties. Material constitution provides the groundwork by explaining the relationship between objects and their constituent materials. Emergence builds on this by explaining how complex properties and behaviors arise from these foundational relationships. Together, they offer a comprehensive view of how the physical world gives rise to complex phenomena.


Conclusion about the conclusions:

It is a mix of dog chasing its tail and Wishful Thinking, but the problem, which is at stake, remains a mysterys without solution