Last month and my Blofeld

Over the last year I sold all my musical equipment except a Vintage V100 and a Waldorf Blofeld. Why couldn't I part with this piece? The thing is that the Blofeld is just too versatile for the price. For a little more background: I started selling a lot of my stuff when I realized music will always be just a hobby for me, and my financial state doesn't justify having analogue drum machines and such. For a start, see the famous jexus:

And my own samples can be heard on this page in the first three tracks.

I think that's all, really. Anyone who looks up this post could check the technical details anywhere, like how to set up multi mode, how to activate the LE samples, or what wavetable synthesis means. You are probably here to listen to what others make with the Blofeld.

On another matter, I recommend this talk and the whole channel as well.

Is skepticism just making everything more complex, and can we make arguments and theories simpler by applying Occam's razor? This is yet another case of the ancient Ship of Theseus problem. Where do borders lie? When do we "stop the razor" once we apply it? Is it only a single step? What is single? What is a number? The thing is that Occam's principle is just a principle, and whatever reality is, it clearly has no obligation to principles like this. The above example is the memory skeptic including an unnecessary step in between, and the real question is: where does skepticism stop? How do our categories arise through carving up parts of our experiences?


Although I'm still deep in my Poidevin translation project, I realized that I still have a lot of spare hours I don't do much with. So after finishing the primer, which was roughly two years of work (really an overstatement, as I skipped many months without working on it at all), I decided to start translating "the" big book of Pali, Warder's introduction.

It's really worth considering picking up the New Course in Reading Pali instead of the difficult and lengthy Warder book, but I decided to delve into grammar rather than aim at reading the suttas as soon as possible. Call it personal interest or whatever else; the additional grammatical analyses in Warder actually interest me. As time goes on I may give it up and switch to the new reader anyway, only time will tell. Just as before, I can't guarantee anything. You can access a draft of the first few pages here.

Otherwise I have a lot of psychological issues these days I’d rather not talk too much about.



Are the laws of physics constant relations between phenomena?

Update (2017.10.28.): An astrophysicist's take on the matter

For a while I have wanted to write this post about a semantic disagreement over questions like "can the laws of physics change" or "could physical constants change over time". The answer really boils down to your ideas about time, laws, and complexity, but let me introduce the topic as a whole.

Introduction – Frame of reference

Time translation symmetry (that physical laws don't change with time) is fundamental to physics (and, if you accept the supervenient picture of the different sciences of matter, to chemistry, biology, and so on as well) and is usually taken as an observation or a measurement – so within the epistemic or positivist context, a fact of the world. In my opinion it's always a problematic part of theories to take facts for granted; there is a reason why different theories of realism exist. And I think nobody should, or probably would, claim that cosmological constants, field equations, and so on can't change with time, because due to the practical impossibility of induction you can never be sure of any law derived from measurements.

However, physicists often claim that cosmology, for example, is also a study of such metaphysical (as in, about physics) claims about laws: measuring the observable universe is also measuring the past of the universe, and therefore, because of the finite speed of light, measuring how the known or yet unknown laws unfolded within these spacetime boundaries of the cosmos. We can and do use the techniques of astronomical spectroscopy to measure many properties of distant stars and galaxies, such as their chemical composition, temperature, density, mass, distance, luminosity, and relative motion. It turns out that in all directions of space, no matter how far away, the light coming from distant stars and galaxies was produced within stars by the exact same process of stellar hydrogen fusion. This process works the same throughout time and space, and so stars work the same way throughout time and space. The process of stellar hydrogen fusion depends on a considerable number of physical laws. So by mere observation of the light coming from distant stars and galaxies, as noted by theoretical astrophysicist David N. Spergel[1], we observe that physical laws don't change with time – that they were constant in the observable past (non-accessible) spacetime.

It's important to note here that what these findings indeed prove – and there seems to be little controversy about this – is that the laws of physics didn't change in the observable past spacetime of our cosmos. Induction really comes into play when we extrapolate these findings into the "future", or rather claim a different thesis: that based on this, the laws of physics won't or can't change over time, and that it makes sense to judge scientific hypotheses by their predictive value. The point is that it would be problematic enough to claim otherwise, because the past is already non-accessible to us; from our perspective there is no change taking place in that spacetime region of the cosmos.

Still framing these questions with reference to our past: have physical constants changed with time? The fundamental laws of physics, as we presently understand them, depend on about 25 parameters, such as Planck's constant h, the gravitational constant G, and the mass and charge of the electron. It is natural to ask whether these parameters are really constants, or whether they vary in space or time. Over the past few decades there have been extensive searches for evidence of variation of fundamental "constants." Among the methods used have been astrophysical observations of the spectra of distant stars, searches for variations of planetary radii and moments of inertia, investigations of orbital evolution, searches for anomalous luminosities of faint stars, studies of abundance ratios of radioactive nuclides, and (for current variations) direct laboratory measurements. So far, these investigations have found no evidence of variation of fundamental "constants." The current observational limits for most constants are on the order of one part in 10^10 to one part in 10^11 per year. So, to the best of our current ability to observe, the fundamental constants really are constant.
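As a back-of-envelope sketch (the per-year limits are the ones quoted above; the linear-drift reading is my own simplification), even the tighter bound leaves room for noticeable cumulative drift over cosmic time, which is exactly why look-back observations that constrain the total change directly matter so much:

```python
# Back-of-envelope: if a fundamental "constant" drifted linearly at the
# quoted observational limit, how much could it change in total over
# the age of the universe?

AGE_OF_UNIVERSE_YEARS = 13.799e9   # value quoted later in this post
DRIFT_PER_YEAR = 1e-11             # tighter end of the quoted limits

max_total_change = DRIFT_PER_YEAR * AGE_OF_UNIVERSE_YEARS
print(f"max cumulative fractional change: {max_total_change:.1%}")
# prints: max cumulative fractional change: 13.8%
```

A per-year laboratory bound alone would still permit a ~14% total drift since the Big Bang; only the astrophysical measurements over the whole lookback time rule that out.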

Light, and the laws derived from these constants, may be taken as a "constant enough" experience, but given that thermal equilibrium is a possible outcome for the whole universe (or rather a limiting case of our theories, like the black holes that will be mentioned in a moment), we are only ever talking about scales. Light may be taken as invariant on your scale, but what about bigger scales? We could only test our "direct observations" within such a tiny frame of context that I'd be interested in how someone would go about explaining singularities within a cosmologically constant context.

The cosmic microwave background is the oldest light in the universe, dating to the epoch of recombination about 378,000 years after the Big Bang; the current measurement of the age of the universe is 13.799±0.021 billion (10^9) years. So we have good evidence that electromagnetism, chemistry, gravity, and the nuclear forces all worked as they do today for a very large part of the entire lifetime of the universe, and indirect evidence that the standard model of subatomic particle physics dates all the way back to when the universe was only one second old. All measurements made to date, from across space and time, indicate that the laws of physics settled down to their current values very shortly after the Big Bang event.

My objection is really about using our scales, because the age of the universe may seem like a really big number rather than a tiny fraction of spacetime, but it's really nothing compared to, for example, the estimated time required for a Boltzmann brain to appear. The different epochs within the context of the Big Bang actually seem to me proof that these constants can change over time, as newer and newer formations of matter emerge and influence how other particles behave. What is really interesting are the scenarios of our limiting cases here: how the Big Bang event itself, and black holes or singularities in general, may be such wild "fluctuations" that our theories break down, or these laws change so rapidly, that if our theories want to include these phenomena, we need to make them more and more complex. Up to what complexity, and at what scale?

It is often claimed that the laws of physics settling down to their current values, once the universe had expanded and hence cooled enough, is not change over time but change under extreme conditions, which only exist very near singularities. But what's the difference between change over time and change under extreme conditions, if these conditions emerge over time? Under general relativity it should be suspicious to handle space and time as separate coordinates of events; when discussing such general theories, space shouldn't be separated from time – everything that happens in space happens at a certain time as well. All nature does is natural, is an effect and an action. We shouldn't distinguish spontaneous nuclear emissions from ones occurring in a laboratory, or initial singularities from other events, to the extent that they would let us introduce meta-laws and consider those constant.

Someone could cite quantum fluctuations as an exception, for example, to everything having a cause, claiming that they apparently have no cause. The real issue here is that (meta-)physical laws are riddled with such exceptions, or limiting cases, which are in fact called singularities or dark matter and such. They show us the applicable range of phenomena, telling us that our laws are not yet proved universal, mostly pointing to the future as a hindrance to their predictive power. As the observable part of our universe grows through better measurements, we seem to get more and more general theories of matter, because there are fewer of these extreme cases to be seen.

What time translation symmetry claims is that, for example, the weak nuclear force would never work differently under the same circumstances. Only the conditions matter: whether the conditions obtain now or in 100 billion years, the laws are still constant. Water would always have boiled under the right circumstances and always will, even if we didn't know about it. The fundamental forces of the universe working differently under near-singularity conditions doesn't mean that the laws change at all; it just means that they are more complex than we initially thought.

Let’s have a little break here with a related interview.

Are there physical laws at all?

Why don't we include time itself among the conditions of phenomena? If we did, the epochs of the Big Bang might be enough to prove, in a certain way, that the laws of physics do change. By definition we call relations laws if they have predictive value, which is just another way of saying that they are constant, or believed to be constant. How complex could these theories and laws get? At what complexity would the so-called extreme conditions disappear? The problem only seems semantic because we define scientific laws themselves by these principles, so we could just as well ask whether there are such laws at all. Even meta-laws introduced to govern the potential change of laws may themselves become laws by the very definition, so we can't argue about this system from within the frame of these constants.

These are the main concerns of my inquiry into the issue, and they lead straightforwardly into the metaphysical domain of change (constancy) and time, or potentiality (prediction), itself. It's important to see that where I may discover a potential infinite regress into the domain of meta-laws, it becomes not even a theoretical issue of physics, because in (semantic) practice there is not a single difference between different laws as far as theory is concerned. They are always the relations that remain the same, and whether you take time into the context of conditions or not, they may either get discarded, or become different laws (meta-laws) for different epochs, or, without time included, simply have extreme conditions (singularities) understood as exceptions.

The past exists in a single way – in either physical or mental memories – while the now is happening, and to understand the nature of change and of laws themselves we need to turn to the now, not to the past. Not just by the definition of laws as constant relations, but by the definition of the past as well: we consider every event that is constant as past. You can frame it in different ways and consider the past as constant, or everything constant as past; the fundamental difference between now and past is always the same – that past events are not accessible to any (observer) participants of the universe. The point of such definitions is to say the least possible about time and change itself, and not to make our work even more complex than it already is.

Are there constant relations at all? For constant relations you may need constant participants (monads, dhammas, matter, particles, etc.), or else you risk entering the domain of the extreme conditions mentioned earlier, where the relations become inapplicable – since we are clearly not stating such universal metaphysical laws as the most universal statement that "everything exists". The most universal and constant relation would be existence itself, as everything related to something can be said to exist. When we consider physical laws, or stricter metaphysical laws, they must have a certain domain of application, a set of participants they relate to.

We can probably agree that the set of these participants must be constant enough for the law even to appear. Electrons, for example, must be constant enough to produce any effect we can measure and see as a constant relation between them and other electrons or different participants – even if just virtual particles. For constant relations we will need constant or identical participants, so a theory of such physical laws will be equivalent to a theory of change, which is equivalent to a theory of time.

The difference between constant and identical participants doesn't matter much in this framework, because the relation itself gives us the reference for calling something identical, as long as the relation is applicable to the participant. In other words, we can call something an electron if it behaves (in motion and in every measurement) according to our theorized relationship between electrons and different phenomena. The criterion of constancy is the same: such an electron hasn't changed if it still behaves according to our theory, and especially to the instantiated attributes gained through measurements.

As can be seen from that last statement, on the scale of instantiation for example, quantum phenomena can be theoretically problematic – to the extent of John Wheeler postulating the idea of a one-electron universe, where there is only a single instance of the particle, existing in different spacetime regions[2].

I don't want to get into the epistemological debates about the quantum realm, as it's a huge topic in itself; let's just regard constancy and identity as pretty much the same for our issue of laws. Just how constant could laws be without constant participants? Not only that, but their real predictive value depends on the constancy of the involved sides too, which may change quite a lot over spacetime variations. Scaling is very important here, because if we consider the option of no change ever happening again after a certain period of time (the well-known heat death scenario), then these laws really govern a tiny spacetime dust of chaos before an eternal constancy. We get to see the laws of a turbulent phenomenal storm before the real events of a "no change" cosmos, with a very different and uneventful scenario of physical laws. We could just as well say that we are still in just another epoch of the Big Bang.

Applying this to the earlier discussion: for the law of boiling water to be considered constant, the compound known as water should have universal constancy. In the future there may be no such thing as water, which would be required for the law to count as a constant law. Of course, even if water suddenly boiled at a different temperature, then shifting to meta-laws governing the boiling of that compound, or outright switching the law itself, would not be considered change on semantic grounds, so it could just as well be argued that changing participants don't mean changing laws.

We have really come full circle, because it is against the very definition of laws to change. But again, if we understand that definitions are mind-made categories, it begs the question whether such laws even constitute reality, and we need to reduce them to a concept of nature and/or to the participants of the laws themselves to answer that question. The conservation of energy, for example, after a thorough analysis stripped down to the bare phenomenon, only says that something does remain constant – that there are constant participants constituting reality.

Is physics just applied structural realism?

When we leave the semantic ground behind, it really seems that compounds and matter may change and even disappear, but the foundational centre of physics – finding the constant laws of reality – could just as well remind us of structural realism, the idea that only relations, and therefore structure, are real (constitute reality). If I call physics applied structural realism, mathematics itself could become theoretical structural realism, but that really depends on one's view of numbers themselves.

To get to the core of this issue about laws: they are really relations between different participants, as stated above, while structure is really just a set of relations, whether in a system or not. (A system here can be understood as the structure of relations.) A stance on what is constant is really a stance on realism, because any kind of realism at its core deals with the question of what is constant, what is unchanged, "at the deepest layer of phenomena". In Abhidhamma it's no surprise that this could just as well be called dhamma realism[3], because according to that framework dharmas constitute the so-called ultimate reality.

It's interesting to see how philosophical issues are often considered just a semantic problem by positivists, because we could consider the case of shifting to meta-laws, which could already count as meta-physical (again, as in about physics) for the analytic philosopher – just like Noether's theorem, which is a statement about the relation between laws and constants.[4] The theorem informally states that to every differentiable symmetry generated by local actions there corresponds a conserved current. Yet there is a qualitative difference between the universality of such theorems, which still have the domain of physical symmetries themselves, and what is philosophically considered metaphysics and arguments over realisms, where we usually have the domain of any kind of relations.
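As a worked special case (standard textbook material, not something taken from reference [4]): applying Noether's theorem to time translation symmetry yields conservation of energy. If a Lagrangian has no explicit time dependence,

```latex
% Time translation symmetry: \partial L/\partial t = 0.
% Then, along solutions of the Euler–Lagrange equation
% \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q},
% the energy function
E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L
% is conserved:
\frac{\mathrm{d}E}{\mathrm{d}t}
  = \ddot{q}\,\frac{\partial L}{\partial \dot{q}}
  + \dot{q}\,\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}}
  - \frac{\partial L}{\partial q}\,\dot{q}
  - \frac{\partial L}{\partial \dot{q}}\,\ddot{q}
  = \dot{q}\left(\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}}
    - \frac{\partial L}{\partial q}\right)
  = 0 .
```

So "the laws don't change with time" is not a free-floating assumption here: within this framework it is exactly equivalent to energy conservation, which is the kind of law-to-constant relation discussed above.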

Summary – Could metaphysics be enough to decide?

I wrote this post to clarify my thoughts about the issue, not to settle it. Certainly a long way to go just to state a relation between physics and structural realism, but it often appears that certain applications of our categories and structures have wider universality than they actually do. I didn't even mention the issue that whenever we talk about nature, we ought to include participants acting according to knowledge of these laws – I mainly mean humans here, agents who decide based on it – leading to the issue of emergence I mentioned in an earlier post on this blog but didn't write about extensively for lack of time and research.

Physics is concerned with measurable entities, and the constancy of the relations between these entities depends on the constancy of the entities themselves – unless we base their constancy on the relations, defining the relata by the relation, which would be structural realism. In other words, if we think that relations are still "there" and real even if there are no relata belonging to their domain – that they don't emerge out of entities or processes behaving a certain way, and have no base of their own – then we put the structure before the structured.

Either way, a further discussion should be able to explain the consistency of either the structured or the structure itself, leading to the same debate: one that would involve a proper theory of change, and therefore of time itself, which should inevitably touch on transitions and identity, while trying to settle a fundamental difference between the structure and the structured – whether we can talk about relations of relations as well, seeing them as just another type of entity, with applicable measurements or not, and therefore also debating the nature of matter.

Reading Suggestions (#1)

I can't keep up with weekly or even monthly reading suggestions, so here are just a few interesting articles from the not so distant past that are worth looking into. I mostly work on my Pali primer translation these days, as I decided to drop the English-to-Pali translation exercises to save some time, so I may be able to finish it soonish. The actual reason for my decision will be explained in the final document, if anyone is interested. There was also a question about an improved edition of Mooncorridor; that may happen in the future, but I have no idea about the exact date.

Moore's law is ending in the near future, as physical limits – more specifically the quantum realm – constrain the application of transistors and electrical phenomena in general. There's an interesting, short paper by HP Labs envisioning our possibilities for entering the next age of computing.

As consciousness becomes a more and more popular topic, so will emergence (called higher-level causation here) be more studied by interdisciplinary means. There's a new post that is a really good introduction, written by a very special person working in one of the most important scientific fields of today, complexity theory. Emergence has been at the "heart" of computer science since the beginning, with famous examples in the work of J. H. Conway and Stephen Wolfram.

(The mentioned essay: Agent above, atom below)
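Since Conway's work is the canonical example here, a minimal sketch of his Game of Life (my own illustration, not from the linked essay) shows what emergence means in practice: the "glider" behaves as a coherent moving object, even though the rules only ever mention individual cells and their neighbors.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbors each cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors, survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)

# After 4 generations the glider reappears, translated by (1, 1):
print(g == {(x + 1, y + 1) for x, y in glider})  # True
```

Nothing in the two rules refers to a "glider"; the object and its diagonal motion only exist at a higher level of description, which is the point the essay makes about agents and atoms.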

Emulating or simulating the brain and its workings is only a matter of time. There is another groundbreaking study from a month ago, on the cognitive topic of metaphors and categorization, that may enable building and emulating mind-like behaviour. It's important to note, though, that whatever systems we build, we are simply mimicking (simulating) biology, and these ways are not necessarily how our minds actually work (which would require biological emulation) – a compelling argument not to over-anthropomorphize these computing systems.

(For more information you can look up proper emulation projects such as SyNAPSE.)

On Real Life there is a psychological guide to our online methods of reflecting on the self and building our image from the pre-made assets of social media. As most of us use these websites, which employ algorithmic prediction to configure our access and content for us, it is interesting to examine these new categories built on the fly for us, which may solidify existing norms and thoughts on what counts as self, and as knowledge as well.

And lastly, here is just some fun discussion without any further context.

Weekly Reading Suggestions (#3)

Another week has withered away.

With all the movies and clickbait articles, it's surely a new peak time to be interested in AI. The thing is that some breakthroughs are genuinely profound, so the term and the field surely won't go away any time soon like they did in the 70s. It's an important task to map the propaganda and talk as clearly as possible. The linked post is the first part of an upcoming series of posts.

I was always interested in conlangs, and especially in lojban, as it's a very good example of how far a logical language can go. This article seems to claim that the original goal of lojban has been successfully accomplished by the mere fact that you are able to make up categories unknown prior to these lojban words. The only question that matters here is whether these categories are invented or discovered. (That should remind you of a very famous debate.)

This article, produced after a talk, is slightly related to the book mentioned in one of my earlier posts; it's a computational perspective on analogies and metaphors. Cognitive science is very well aware that all thinking is a way of categorizing relations, of producing analogies, but a deeper understanding of the involved concepts would be difficult. This particular talk is interested in how metaphors shape the way we understand computing and its related problems.

If anyone is still not familiar with the famous self-referential paradoxes, then this is a decent text to start with before reading the proper introductory texts about these issues of logical consistency within recursive systems. The article itself is very shallow, but I like these topics, so I decided to include it here after all.

Recently I found this publisher; they have countless books freely and publicly available to read and download. That is great in itself, but their material is also superb, so I had to mention them in case someone is not yet aware of their work. I'm not even sure how I could have missed them until now.

Weekly Reading Suggestions (#2)

A week has passed, so I'll post what I'm reading nowadays.

I'm mostly digesting this book these days; it was more successful in making me believe, for a few days, that there is only mathematics in reality than S. Wolfram was with his book of physics introducing his idea of universal computation. The author seems successful at reducing such notions as analogy or meaning to patterns and ideas found in abstract algebra, which is probably extremely fruitful in the cognitive sciences and in designing artificial systems that mimic our understanding of the world. Highly recommended, even though I'm not finished yet.

This article from 2013 has surprising information about the inner workings of recent hard disks, and is one of the reasons I've been highly disillusioned with technology for years. The usage of universal tools should be a controversial matter on its own (see early cybernetics, or papers authored by von Neumann himself), but the practical issues are only being felt nowadays. The fact that there is an entire, more than capable computer in your hard disk should raise awareness of the privacy, security, and resource management problems involved.

This era, which could be the golden age of philosophy (with so many theoretical debates and crises going on), is actually naturalizing it; we are seeing philosophy dissolving into different interdisciplinary fields. No surprise, then, that as philosophy gets reduced to metaphilosophy, we may wonder how that is even possible, when such a field should include its own meta-field, being unique in that way. There is a volume about this exact issue that I found by accident at a second-hand bookstore, containing the material of a conference held in 2015 in my country.

There are two Nautilus articles that may not be recent, but they are very well worth reading; both are a good way to destroy our paradigms for a few mere moments.

A lot of theories are "bounded" by expectations; they are no more than made-up and fine-tuned axioms to explain already existing phenomena. That is very much the case with physics, and it's a fundamental question to ask how much of physics should be driven by our own experience and our own expectations of reality itself.

Related is the series of posts by M. Pigliucci on his blog that actually made me follow him two years ago.

It sounds like some sci-fi plot, but whenever "alien" intelligence is in question we must be aware of how alien the structures we are talking about may be. Clearly swarm intelligence is a thing; algae and ants solve problems that would be impossible for any individual conceived on its own. Their "algorithms" lie elsewhere than in the insect body itself, so it's an interesting thought experiment to ask whether alien life may be physics itself. (Something that is even mentioned in the novel idea I will never finish, but there are other novels strongly connected to this idea.)

As a bonus, this paper is really the conclusion and pinnacle of all the Gödel-Intelligence-Logic debates that are so fashionable nowadays.

Pali Primer (Hungarian translation, Pali language textbook)

Three years ago I started translating De Silva's Pali Primer, but never finished it. I'm slowly working on it again; the original blog where I was working on it can be found here.

Currently I'm at the 12th lesson, so roughly a quarter of the book is done so far. That is not much for three years, for sure, but I hardly worked on it as the years passed. It's a quite concise introduction to the Pali language – sometimes way too concise, which makes the translation kind of difficult for me.

Just as an example, it feels like the book introduces possessive forms in the exercises (their dogs, his field, etc.) alongside the genitive case before even teaching them properly. I also finally found an answer key, so I can help myself and the translation by being sure of the right answers.

No excuses though, I'm working on it again. Please don't mind the design, or rather the lack thereof; I will make a proper and consistent look once I'm at least halfway done.
As of August 2017, I'm like 99.99% done with the translation, currently dealing with design issues only and trying to find any last mistakes.

Last update: 2017.08.08.
My work is close to being finished, and I have contacted the original publisher to settle any copyright issues. Meanwhile, I'm not hosting the translation.
Last update: 2017.09.12.
For now you can access the translated book here, but please keep in mind it isn't finished by any means.

Weekly Reading Suggestions (#1)

Originally I wanted to make a comment on another blog, but it didn't get approved, so I may as well start writing suggestions here. There are several I want to share, but as a rule I will try to use content not older than a year.

The presupposed idea that evolution favors accurate representation of the world seems uncontroversial to a lot of people, but an experimental simulated-population study shows it's a complex topic: the accuracy of a given perception method may not be as important for survival as many of us suppose.

Although almost a year old, this paper shows that something is deeply wrong with the tools and framework of modern neuroscience, because the field is not even capable of mapping such a well-documented processing unit as the MOS6502.

SEP summaries are often concise and useful, but for the history of the so-called "free will" debate and its ideas, I haven't yet seen a better summary than this one. The rest of the website is just as interesting, with a lot of analytic-philosophy-related articles and opinions.

One of my favorite suttas about the futile attempt to theorize everything, neglecting our actual life and the practical approach to everyday experiences.

Last year there was another Adam Curtis movie released.

There is a growing number of articles describing the controversial act of preserving artistic works in a digital context, something that was famously debated in the 1972 television series Ways of Seeing.