Do we know what matter is?

Introduction

Matter is at the same time something very mundane and incredibly complex. We deal with material objects all the time, and yet when we are pressed to define what matter is, we end up with an unsatisfactory answer like “matter is that of which things are made”. We can then give examples of different materials (types of matter): steel, water, soil, rock, air, …

Our understanding of matter has changed a lot over the last century with new scientific insights, but much is still debated, and many of the positions defended today have a long history behind them. This topic has been a central issue of philosophy for over 2000 years, so clearly a lot of thought has gone into the matter.

Philosophical understanding of matter

Matter has been a topic of great discussion since the time of the Greeks, and it is still a central debate between religious people, who in some way or another believe in the existence of non-material, let’s say spiritual, things, and atheists or materialists, who state that matter is all that exists.

The discussion of the essence of matter in ancient Greece

The ancient Greeks thought matter was formed by mixing four elements: water, air, fire and earth. Different materials, from wood to iron and gold, were thought to be mixtures of the four basic elements in different proportions. Thales of Miletus thought the first principle was water; this came from the observation that there is moisture everywhere, so water must be part of everything. His pupil Anaximander said that water could not be the first principle, since water cannot produce fire; the same happened with the other elements, as none was able to create its opposite, so there was no principal element as such. He nonetheless proposed a perfect, unlimited, eternal and indefinite substance, the Apeiron, from which everything was created. Anaximenes, Anaximander’s pupil, returned to the elemental-principle theory but proposed air as the original element from which everything else is created: through rarefaction air produced fire, and through compression water and subsequently earth.

Pythagoras of Samos said that numbers, not matter, were the origin of everything. Heraclitus said that no permanent matter was possible, since everything in life is flux and continuous change. On the other hand, Parmenides believed that the universe was static and that this was the only truth; our senses, however, perceive change and are unreliable, rendering knowledge of the truth through them impossible. Leucippus and Democritus held that matter was composed of indivisible constituents, atoms. As one can see, there was a long debate about what matter is even as far back as the 6th century BC.

John Dalton proposed his atomic theory in 1803, giving scientific support to the views held by Leucippus and Democritus.

Modern philosophy: idealism and materialism

Idealism is a movement of thinkers who held that reality is principally mental: all we know of the world is our mind’s interpretation of the world in itself, which is out of our reach. The best-known of these thinkers are Immanuel Kant, G. W. F. Hegel and A. Schopenhauer. In their theories matter is relegated to a secondary place, since the world is basically immaterial, or at least our knowledge of the universe is so strongly conditioned by our mental processes that the original essence of matter is not critical, because all we ever have a chance to know is mental.

On the opposite side, materialism is a form of philosophical monism which holds that matter is the fundamental substance in nature, and that all phenomena, including mental phenomena and consciousness, are results of material interactions.

There are other philosophical theories that posit a dualistic or pluralistic reality, meaning the world is not purely mental, spiritual or material but is a composition of two or more aspects. Descartes is probably the best-known representative of the dualistic view of reality.

 

Scientific understanding of matter

From a scientific perspective, in classical mechanics a material object is characterized by a position in space and time and by some physical properties like volume and mass. All of these properties of matter have a very specific definition and meaning, but the only definition of matter that we can extract from mechanics is: “matter is that which occupies a position in space and time, takes up a volume and has mass”.

If we go further into quantum and relativistic mechanics, we find that ordinary matter is structured: it is formed by atoms, which themselves are composed of particles: electrons, neutrons and protons. Neutrons and protons are in turn composite particles, each consisting of three quarks. Other, more exotic forms of matter exist, formed by all sorts of particles: muons, tauons, neutrinos (in three flavours), mesons (formed by a quark-antiquark pair), … In addition, we know from relativity that mass and energy are really two aspects of the same thing, so massless particles like photons also qualify as matter. From this perspective matter is formed from a sea of particles, which in turn are just things that occupy a position in spacetime and have some physical properties like mass, electric charge and spin.

Fig. Particles of the Standard Model of physics.

As we can see, science is good at telling us the structure of matter and what its building blocks are, but it cannot really answer what matter is. This is a consequence of the scientific method, through which hypotheses are falsified and theoretical predictions are verified. This process ensures that a lasting theory has endured testing and that all the predictions based on it so far have been verified; it does not, however, rule out that some future prediction may fail, requiring a new refinement of the theory. As a consequence, what science can definitely say is how the real world is not: the world cannot be as described by any proposal whose predictions have been falsified, since a failed prediction is definite proof that the world is not how we proposed. That is why science will never be able to answer what anything is, only what it is not.

In this sense matter is not a continuous medium, since it is made of discrete pieces (subatomic particles). Matter is not static, since these particles are in continuous motion. Matter is definitely not mass, since mass is only a measure of a body’s resistance to a change of motion, inertia, which is a property of most matter (all except massless particles, like photons). Different types of matter interact with each other in different ways through four forces: gravitation, electromagnetism, the weak force and the strong force.

Discussion

Today, in our scientific worldview, most people have a materialistic perspective on nature and life. All matter is made of atoms, and all that is or ever has been is made of the particles of the standard model, and possibly some other particles not yet discovered. This begs the question of whether abstract concepts exist, and by “abstract concepts” here I include such things as chairs and tables, not only truly abstract ones such as goodness and happiness. These concepts exist in our minds but not in nature; in a materialistic explanation these concepts do not really exist, they are generated from chemical reactions in our brains in the same way awareness arises. And here is where the opposites touch: in a materialistic perspective abstract concepts do not exist, because they are non-material; from the point of view of idealism or dualism they do exist, but in a different “realm of ideas”. Both agree that these concepts are non-material and, in the sense that we use them every day, they undoubtedly must exist in some non-material way. The only difference is whether we disregard this non-material existence as non-existence. So in an ontological way the difference is really not as great as initially expected.

 


Climate Radiation Model

This model is inspired by the model posted by David Evans on his blog. It is based on the concept of emission layers of the atmosphere: each of the radiatively active gases in the composition of the atmosphere emits infrared radiation at characteristic wavelengths and from a different atmospheric layer.

The active gases of the atmosphere, sometimes called greenhouse gases, are H2O, CO2, O3 and CH4, in order of decreasing thermal emission. Apart from the active gases, some radiation is emitted directly from Earth’s surface and from the top of the clouds through what is called “the atmospheric IR window”, the part of the IR spectrum in which the atmosphere is transparent. In David’s nomenclature these six possible sinks for the incoming heat are called “pipes”; of these, two are of minor importance (O3 and CH4), leaving four main pipes. Energy can redistribute through the other pipes if one of them gets blocked, for example by adding CO2.

Fig 1. The spectral outgoing long-wave radiation (OLR), showing the spectral windows of the different gases and the transparent window through which the surface emits. In grey is the blackbody emission of an object at 300 K ≈ 27 °C.

David does a very good job of summarizing the available data on the emission heights of the different gases and of the top of the clouds, here. The gases are assumed to be almost blackbody emitters within the window in which each is active, meaning the emitted energy is only a function of the temperature of the atmospheric layer from which the emission takes place. Since the temperature of the atmosphere decreases with altitude (in the troposphere), a higher layer emits less power than one closer to the Earth’s surface.

David’s OLR (outgoing long-wave radiation) model is only concerned with how the variation of various parameters modifies the distribution of heat through the pipes; how these parameters may depend on temperature or other independent variables is outside his scope.

Here I am going to lay out a thermal model, based on well-known physics, to try to explain some of these missing relations. The first step is to build a model that fits the data, so to that purpose I am going to use the numbers from David Evans’ post:

  • Lapse rate = 6.5 °C/km, surface temperature = 288 K
  • Cloud cover = 62%, albedo = 30%, solar constant = 1367.7 W/m²
  • Water emission layer: height = 8 km, output power = 33%
  • Carbon dioxide emission layer: height = 7 km, output power = 20%
  • Cloud top emission layer: height = 3.3 km, output power = 20%
  • Methane emission layer: height = 3 km, output power = 2%
  • Ozone emission layer: height = 16 km, output power = 5.8%
  • Surface emission layer: height = 0 km, output power = 18.2%

Note: for now I have treated the CO2 as emitting from a constant average height. I liked David’s treatment of the weights of the spectral emission across the spectrum, and I am planning to take a similar approach in my next refinement. (End note)

The model uses a two-surface representation of the Earth: surface 0, the ground surface (the origin), and the top-of-the-atmosphere surface, which is characterized by the maximum height of the convective Hadley cell. The temperature profile is assumed to be linear throughout the atmosphere, so once the convective overturn is specified and the temperature at the top of the Hadley cell is known, the temperature of any other layer is linearly interpolated. The amount of energy that flows through each pipe is controlled by six additional parameters that represent the spectral widths of the different spectral windows for each pipe. In the analogy of flow coming out of a dam through a set of pipes in parallel, these parameters represent the widths of the pipes. For now these values have been adjusted to fit the percentages specified above, but I intend to deduce their dependence on the height of the emission layers and the wavelengths of the windows in the next post of the series.

The complete equations of the model and the values of the different parameters are in the link. The core of the model is equations 41, 50 and 51, representing the energy balance in the two regions, the surface and the atmosphere.

Fig 2. Model schematic: a one-surface, one-band model. There are two balance equations, one for the surface and one for the upper atmosphere as a whole. The atmosphere emits from different layers which are at different temperatures.

The incoming solar power, modified by the albedo, is the heat source of planet Earth, and this heat is assumed to be absorbed at the surface. The surface balances the heat through radiation and convection mechanisms. The surface radiates either directly to space (about 18%) or to the clouds, which makes a total of three heat sinks for the surface: the two radiative mechanisms and the convective one.

Q_{Solar}=Q_{Conv}+Q_{Direct}+Q_{ToClouds}

The atmosphere, on the other hand, is heated by the surface through the convection and radiation-to-clouds mechanisms, which, being heat sinks for the surface, become sources for the atmosphere. The atmosphere is balanced by its own sinks, which are the radiation to space from the different active layers: clouds, H2O, CO2, CH4 and O3.

Q_{Conv}+Q_{ToClouds}=Q_{FromClouds}+Q_{H2O}+Q_{CO2}+Q_{CH4}+Q_{O3}

Each of the radiative emission layers is modeled like so:

Q_i=A_i \epsilon f_i \sigma T_i^4

T_i=T_0-\alpha h_i

where A_i is the surface area, \epsilon is the emittance of the atmosphere (0.996), \sigma is the Stefan-Boltzmann constant, T_i is the temperature of the emission layer in K, f_i is the window factor, T_0 is the temperature of Earth’s surface, \alpha is the lapse rate and h_i is the height of the emission layer.
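
As an illustration, here is a minimal sketch in Python (my own, not part of the original model code) of the two relations above. The height and lapse rate plugged in at the end are the H2O layer values from the parameter list; the window factor is only a placeholder, since the real factors are obtained by the fit described below.

```python
# Minimal sketch of one emission layer: T_i = T0 - alpha*h_i, Q_i = A_i*eps*f_i*sigma*T_i^4
SIGMA = 5.67e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
EPS = 0.996       # emittance of the atmosphere used in the post

def layer_temperature(T0, lapse_rate, height_km):
    """Linear temperature profile: surface temperature minus lapse rate times height."""
    return T0 - lapse_rate * height_km

def layer_emission(T0, lapse_rate, height_km, window_factor, area=1.0):
    """Radiated power of one emission layer (W/m^2 when area = 1)."""
    T_i = layer_temperature(T0, lapse_rate, height_km)
    return area * EPS * window_factor * SIGMA * T_i**4

# Example: the H2O layer at 8 km with a placeholder window factor of 0.45
print(layer_temperature(288.0, 6.5, 8.0))      # ~236 K
print(layer_emission(288.0, 6.5, 8.0, 0.45))   # ~79 W/m^2 for that assumed factor
```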

The convective heat is modeled as follows:

Q_{Conv}=A_0 h_{conv} (T_0-T_1)

The lapse rate is then:

\alpha=(T_0-T_1)/H

where A_0 is the area of Earth’s surface, h_{conv} is the convection film coefficient, T_1 is the temperature at the top of the Hadley convective cell, and H is the height of the convective cell.
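
As a quick sanity check of these two relations (my own back-of-the-envelope numbers, not taken from the post): with the 6.5 K/km lapse rate and the 8.2 km convective-cell height chosen further down, the temperature at the top of the cell comes out at roughly 235 K, and the convective flux per unit area is simply the film coefficient times that ~53 K temperature drop.

```python
# Back-of-the-envelope evaluation of the convective relations above
T0 = 288.0        # surface temperature [K]
ALPHA = 6.5       # lapse rate [K/km]
H_CELL = 8.2      # height of the convective cell [km], the value chosen later in the post

T1 = T0 - ALPHA * H_CELL        # temperature at the top of the Hadley cell, ~234.7 K
h_conv = 2.0                    # convection film coefficient [W m^-2 K^-1], placeholder to be fitted
q_conv = h_conv * (T0 - T1)     # convective flux per unit surface area [W/m^2]
print(T1, q_conv)
```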

The direct radiation to space is then:

Q_{Direct}=A_0 \epsilon f_{direct}(1-c)\sigma T_0^4

where c is the cloud cover and f_{direct} the direct atmospheric window.

The radiation to clouds is:

Q_{ToClouds}=A_0 \epsilon f_{direct}c\sigma T_0^4-A_1 \epsilon f_{clouds}c\sigma T_1^4

with f_{clouds} being the atmospheric window from the top of the clouds and A_1 the area of a sphere that encompasses the convective layer of the Earth.

Lastly the solar irradiation is

Q_{Solar}=A_0 G_s/4 (1-a)

with G_s the solar constant and a the albedo.

The model then has 8 parameters that can be adjusted to fit the experimental data: the 6 window factors, the convective coefficient and the height of the convective cell. These parameters are set by imposing the experimental outgoing power distribution, the experimental mean lapse rate and the mean surface temperature, which make a total of 7 restrictions. This leaves an extra degree of freedom, which I used to set the height of the convective cell, arbitrarily, at 8.2 km.
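
To make the adjustment step concrete, here is a rough sketch of how the window factors can be backed out from the target pipe percentages. This is my own illustration, not code from the post: it works per unit area, assumes the pipe percentages are shares of the total outgoing power (which in equilibrium equals the absorbed solar flux), and neglects the small difference between A_0 and A_1 as well as the cloud-cover weighting of the surface and cloud pipes, so the numbers are only indicative.

```python
# Rough per-unit-area fit of the window factors from the target pipe shares
SIGMA = 5.67e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
EPS = 0.996       # atmospheric emittance used in the post

T0, LAPSE = 288.0, 6.5                 # surface temperature [K], lapse rate [K/km]
G_S, ALBEDO = 1367.7, 0.30             # solar constant [W/m^2], albedo

q_solar = G_S / 4.0 * (1.0 - ALBEDO)   # absorbed solar flux, ~239 W/m^2

# (height [km], target share of the outgoing power) for each radiative pipe
pipes = {
    "H2O":     (8.0,  0.330),
    "CO2":     (7.0,  0.200),
    "clouds":  (3.3,  0.200),
    "CH4":     (3.0,  0.020),
    "O3":      (16.0, 0.058),
    "surface": (0.0,  0.182),
}

for name, (h, share) in pipes.items():
    T_i = T0 - LAPSE * h                     # emission-layer temperature from the linear profile
    q_target = share * q_solar               # flux this pipe must carry in equilibrium
    f_i = q_target / (EPS * SIGMA * T_i**4)  # window factor that delivers that flux
    print(f"{name:8s} T = {T_i:6.1f} K   q = {q_target:6.1f} W/m^2   f = {f_i:.3f}")
```

In the full model the surface and cloud pipes additionally carry the (1-c) and c cloud-cover factors and the convective pipe closes the surface balance, but the structure of the calculation is the same.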

There are several problems with the current model that will be addressed in the next post of the series:

  1. The temperature of the stratosphere increases with height above the tropopause, at about 10-12 km, so the ozone layer temperature is not correct. The actual ozone layer lies at about 20-30 km, but I chose to leave it at 16 km so that its temperature does not fall drastically when using the linear lapse rate. The stratosphere warms with height because O3 captures part of the UV light from the Sun and is heated by it. In future models I may include this effect.
  2. Although the physical meaning of the window factors is clear, these factors can be deduced mathematically from the temperature of the emission layer and the wavelength interval, as the fraction of the Planck distribution at that temperature that is emitted through the window (see the sketch after this list). This will be tried in the next model; once done, each factor will be linked to the height of the layer, the lapse rate and the surface temperature through the temperature of the layer. The fact that the model has an extra degree of freedom (the height of the convective cell) increases my confidence that once the theoretical window fractions are calculated, which will inevitably differ from those obtained by adjustment, the model will still fit the experimental data within reason.
  3. CO2 emits radiation from a whole range of heights in the atmosphere, through the weights of its spectral window (see figure 1); the treatment of this feature will be studied. I think it is the result of a lower opacity (larger optical length) of the CO2 at those wavelengths, so the solution is not only lowering the emission height but also lowering the emittance at those wavelengths, since a lower absorption (opacity) is always accompanied by a lower emittance at the same wavelength (Kirchhoff’s law of radiation).
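
For point 2, the calculation I have in mind is the standard one: integrate the Planck spectral emissive power over the wavelength interval of the window and divide by the total blackbody emission \sigma T^4. A rough sketch (again my own Python, with a purely illustrative 13-17 µm band for CO2) would be:

```python
import numpy as np

H = 6.626e-34     # Planck constant [J s]
C = 2.998e8       # speed of light [m/s]
KB = 1.381e-23    # Boltzmann constant [J/K]
SIGMA = 5.67e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def band_fraction(lam_lo_um, lam_hi_um, T):
    """Fraction of the total blackbody emission at temperature T [K]
    radiated between lam_lo_um and lam_hi_um [micrometres]."""
    lam = np.linspace(lam_lo_um, lam_hi_um, 5000) * 1e-6   # wavelengths [m]
    # Planck spectral radiance B_lambda(T); pi*B_lambda is the hemispherical emissive power
    B = (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))
    band_power = np.pi * np.sum(0.5 * (B[1:] + B[:-1]) * np.diff(lam))  # trapezoidal integral [W/m^2]
    return band_power / (SIGMA * T**4)

# Example: an assumed 13-17 um window at the 7 km CO2 layer temperature (~242.5 K)
print(band_fraction(13.0, 17.0, 288.0 - 6.5 * 7.0))
```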

 

This has been a very interesting post for me to write, and I look forward to the continuation. Any comment, doubt or correction is welcome.

 

 

Pre-Columbian America

Some time ago I read the book Guns, Germs, and Steel by Jared Diamond. It is without doubt a very interesting book, which makes some points about the influence of geography on the development of civilizations, and in particular on the outcome of the greatest collision of cultures in history: the encounter of Europe and America from 1492 onwards.

I came across an article on Living Anthropologically, an anthropology blog, that criticizes Guns, Germs, and Steel, saying that while the book presents some revealing, even innovative, ideas, it reaches conclusions that contradict the latest trends in anthropology, since it ignores some of the discoveries of recent times.

The same blog recommended the book 1491: New Revelations of the Americas Before Columbus by Charles C. Mann as a good first contact with the most current ideas about what America was like before first contact and how it came to be that way. So, since the topic interests me, I bought the electronic version of the book, and for now I have only read the first part. This first part introduces the rest of the topics and analyses in detail the consequences of the clash of civilizations in America, that is, the evolution of American history after Christopher Columbus’ discovery, the conquests of Hernán Cortés and Francisco Pizarro, and the English and French colonizations of North America.

I am writing this before finishing the book because the revelations I am finding in it surprise me. It is full of interesting details about the first encounters and, above all, it opens one’s eyes to a completely unknown society, of which the general public has ideas very different from those held by the experts. Perhaps it is a subject of no interest to the general public, or perhaps it does not interest the editors of the media, or perhaps a history that does not show the Europeans as the guilty party is politically incorrect; in any case, it seems regrettable that we have never heard of these discoveries, many of which date from the 80s and 90s.

According to Jared Diamond, the reasons why the Spanish defeated the pre-Columbian civilizations are, as the title of his book indicates, technology and contagious diseases. His book explains how geographic circumstances produced these differences between the societies. His argument is based on the following points:

  • The ancestors of the indigenous peoples arrived in America around the beginning of the Holocene, at the end of the last ice age, 12,000-13,000 years ago.
  • These first colonists exterminated most of the large animal species, which could later have been useful to domesticate, because these species did not know man and were extremely easy to hunt.
  • America is a continent oriented north to south, so it presents natural climatic barriers to the diffusion of domesticated species, particularly plants.
  • As a consequence of the previous points, the Americas arrived much later at the Neolithic agricultural revolution.
  • Consequently, the appearance of great empires and complex societies came much later.
  • The appearance of new technologies also proceeded at a slower pace, owing to the lower population density (fewer minds implies fewer exceptional individuals, who in his view are the ones responsible for technological progress).

According to advances in genetic anthropology, archaeological discoveries and linguistic analyses from the end of the last century, many of these points are currently, to say the least, heavily disputed, and in many cases the evidence against them is substantial.

On the one hand, the theory of a single colonization of America in pre-Columbian times is increasingly compromised by the number of archaeological sites dated to more than 13,000 years ago and by mitochondrial genetic material with mutations not present in the indigenous population. (More on this topic when I finish the book; for now I do not have the details.)

On the other hand, the assertion that the American population was much smaller than its European contemporary appears to be false, based on documents written by Europeans shortly after their arrival and on funerary records and remains from indigenous archaeological burials. It seems that the population of Mexico (the so-called Aztec empire, or more correctly the Triple Alliance) was around 25 million inhabitants (at least 15 million), and a conservative estimate of the population of the American continent is 40 million inhabitants. As a reference, at that time the most populous European countries were France and Austria-Hungary with 15 and 11.5 million inhabitants respectively, followed by Italy and Germany with 11 and 10 million.

The initial population had been underestimated because most of the populations the Europeans encountered had already been decimated by contagious epidemics such as smallpox before first contact. Evidence of this comes from the records of Spaniards and Englishmen who found entire villages empty and abandoned, as well as corpses and misery wherever they explored. The current estimate is that 95-97% of the indigenous population of the whole continent died from these diseases within about 100 years.

Because the population that colonized America at least 13,000 years ago was small, all indigenous peoples show lower genetic diversity, and in particular, never having been exposed to these diseases, their set of antibodies was, and still is, less diverse: to the point that the probability that the antibodies of two random people cannot fight the same disease is about 2% in Europe but was around 30% in America, which explains the extremely high mortality rate.

Finally, archaeological evidence has been found of early civilizations in America much older than had been expected, suggesting that cities, and with them high concentrations of population, did not start much later than in Europe. (More on this topic in the next post, since I have not yet reached the chapter that goes into the details.)

Until next time!!

For those visiting from Facebook or LinkedIn: if you liked the article, I would appreciate it if you visited the blog so that your visit is registered. If you found it interesting, hit like on the blog post. Thanks to everyone.

As always, any comment, question or doubt is welcome.

(Sorry if there are any typos; it is a bit late and I am not going to proofread it.)