This post continues my previous post "AI-generated fun with vibe coding, math and physics." I’ve been experimenting with vibe coding, math, and physics with artificial intelligence (AI) assistance from Grok. I find the topics that we're exploring very interesting, but that alone wouldn't be enough to make me dedicate this much time and effort to the project (I'm not detail-oriented). Rather, I want to test the performance of today's AI on things that I know well enough.
Excerpt from my previous post: "I’ve always been interested in fractal geometry and the idea that it could have something to do with fundamental physics and the fabric of reality (see my 2024 book “Irrational mechanics”). Some time ago I stumbled upon a paper that suggested fractional calculus - an area of math I knew exactly nothing about - could be the “natural” (so to speak) calculus appropriate to fractal phenomena. So I started studying fractional calculus. I decided to try and improve my understanding of fractional calculus and its intersections with fractal geometry by building examples and test cases. I relied on Grok for both vibe coding and exploring the underlying math and physics."
Fractals, fractional calculus, and all that
To write this I had to study fractional calculus, which I knew nothing about. I found these references on fractional calculus especially useful: “Fractional Derivative Modeling in Mechanics and Engineering” for theory and “Fractional Calculus” for numerical computation. Too bad the translations from the Chinese originals (yes, Chinese mathematicians seem especially active in this area) are not optimal.
Fractional calculus is often used to analyze the stochastic mathematics of turbulent stock markets. In the words of Benoît Mandelbrot, financial markets “have a ‘memory’ of sorts,” and fractional calculus captures memory (non-locality in time).
Fractional calculus finds important applications in the mathematics of anomalous diffusion and relaxation, anomalous heat conduction, viscoelasticity, thermoelasticity, and other fields of applied physics including biophysics. Like financial markets, these phenomena exhibit memory and non-locality.
In general, fractional calculus is successfully applied to macroscopic phenomena, like those listed above, which are driven by very complicated things that happen in a material substrate at the microscale. These complicated microscale details are absorbed into macroscopic equations through a fractal model of the substrate. Now if the fundamental physics of space and time is driven by hidden microphysics, as suggested by stochastic mechanics, then fractional calculus and fractal geometry could find applications in the mathematics of fundamental physics.
I won’t bore you with mathematical definitions that you can easily find in the literature. I’ll just note that there are strange things called fractional derivatives and fractional integrals - fractional order “differintegral” operators that extend integer order derivatives and integrals to non-integer orders. So it makes sense to speak of the half derivative of a function, which is, loosely speaking, something halfway between the function and its first derivative, or of the integral of any real order.
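To make “halfway” concrete, here is the standard textbook example (for the Riemann-Liouville derivative with lower terminal 0, where power functions behave nicely). The general rule for powers is

$$ D^{\alpha} x^{k} = \frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}\, x^{k-\alpha}, $$

so the half derivative of f(x) = x is

$$ D^{1/2}\, x = \frac{\Gamma(2)}{\Gamma(3/2)}\, x^{1/2} = 2\sqrt{\frac{x}{\pi}}, $$

and applying the half derivative twice recovers the ordinary first derivative:

$$ D^{1/2}\!\left( 2\sqrt{\frac{x}{\pi}} \right) = \frac{2}{\sqrt{\pi}}\,\frac{\Gamma(3/2)}{\Gamma(1)}\, x^{0} = 1 = \frac{d}{dx}\, x. $$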
Fractional order differintegral operators are usually defined starting from certain formulas of differential and integral calculus, e.g. the Cauchy formula for the repeated (n-fold) integral of a function, that still make sense when the integer order n is replaced by a real number alpha. There are different definitions of fractional order differential operators (e.g. Riemann-Liouville, Caputo, Grünwald-Letnikov, etc.), which agree in many common cases but not always. It is convenient to consider fractional derivatives and integrals as fractional differintegral operators of positive and negative order alpha respectively. Fractional differintegral operators reduce to the traditional integer order derivatives and integrals when alpha tends to an integer.
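Among these definitions, the Grünwald-Letnikov one is the most direct to code, because it is just a limit of generalized finite differences. Here is a minimal numerical sketch (my own toy implementation, not code from fracturlab; the function name and parameter choices are mine):

```python
import numpy as np

def gl_differintegral(f_vals, alpha, h):
    """Grunwald-Letnikov differintegral of order alpha on a uniform grid.

    f_vals: samples of f at x_0, x_0 + h, x_0 + 2h, ... (lower terminal x_0).
    alpha > 0 gives a fractional derivative, alpha < 0 a fractional integral.
    """
    n = len(f_vals)
    # Weights w_k = (-1)^k * binomial(alpha, k), computed recursively.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    # At each grid point, a weighted sum over the whole history of f:
    # this is where the non-locality lives.
    out = np.empty(n)
    for j in range(n):
        out[j] = np.dot(w[: j + 1], f_vals[j::-1]) / h**alpha
    return out

# Sanity check against the exact half derivative of f(x) = x on [0, 1],
# which is 2 * sqrt(x / pi):
x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]
numeric = gl_differintegral(x, 0.5, h)
exact = 2.0 * np.sqrt(x / np.pi)
print("max abs error:", np.max(np.abs(numeric - exact)))  # shrinks as h -> 0
```

Note that the weights decay slowly, like k^(-(1+alpha)), so distant samples never stop contributing: that slow power-law decay is exactly the memory discussed next.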
Fractional derivatives of non-integer order are non-local: they don’t depend only on the shape of a function in the immediate neighborhood of a point (like integer derivatives do), but on the entire global shape of the function. In fact, fractional calculus is successfully used to model a wide range of macroscopic phenomena that exhibit non-locality. For time evolution, like stock price charts, we speak of memory, that is, non-locality extended to the past (but I often think of the possibility that non-locality could extend to the future as well).
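Here is a quick numerical illustration of this memory, again as a sketch (same Grünwald-Letnikov weights as above, evaluated only at the endpoint; the helper name is mine). A bump added far in the “past” of the interval leaves the ordinary derivative at the endpoint untouched, but shifts the half derivative:

```python
import numpy as np

def gl_deriv_at_end(f_vals, alpha, h):
    # Grunwald-Letnikov fractional derivative at the last grid point only.
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return np.dot(w, f_vals[::-1]) / h**alpha

x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]
f = np.sin(x)
# Bump near x = 0.1, numerically zero at x = 1:
g = f + 0.5 * np.exp(-((x - 0.1) / 0.03) ** 2)

# Ordinary one-sided derivative at x = 1: identical for f and g,
# because the bump has no local effect there.
print((f[-1] - f[-2]) / h, (g[-1] - g[-2]) / h)

# Half derivative at x = 1: the values differ, because the fractional
# derivative "sees" the bump in the distant past of the interval.
print(gl_deriv_at_end(f, 0.5, h), gl_deriv_at_end(g, 0.5, h))
```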
That both fractional calculus and fractal geometry find useful applications to the same things suggests that there must be a deep link between the two. But at first glance, fractional calculus and fractal geometry seem disconnected: why should differintegral operators of non-integer order have anything to do with fractal geometry?
The simplest link that I can think of is the non-integer dimension of fractals. Cantor sets have fractal dimensions between zero (a set of separate points) and one (a line). Fractional Brownian functions, like stock market charts, have fractal dimensions between one (a line) and two (an area). The idea is that differintegral operators of non-integer order are “just like” fractals with non-integer dimension. Fractals and fractional integrals have similar scaling behaviors, and this suggests interpreting the fractional integral of a smooth function as analogous to the size (Hausdorff content) of a fractal.
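The scaling side of this analogy can be sketched explicitly (glossing over technical conditions). Suppose f is self-affine with Hurst exponent 0 < H < 1, that is f(\lambda x) \approx \lambda^{H} f(x), so that its graph has box-counting dimension D = 2 - H. Plugging this scaling into the Riemann-Liouville fractional integral

$$ I^{\alpha} f(x) = \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f(t)\, dt $$

and substituting t = \lambda s gives

$$ I^{\alpha} f(\lambda x) \approx \lambda^{H+\alpha}\, I^{\alpha} f(x), $$

so fractional integration of order alpha shifts the scaling exponent from H to H + alpha, and hence the graph dimension from 2 - H to 2 - H - alpha (as long as the exponents stay in (0, 1)).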
These things are discussed in the literature. Nowhere-differentiable fractal functions don't have integer order derivatives, but can have fractional order derivatives. It turns out that a fractional differintegral operator of order alpha adds alpha to the fractal dimension of the graph of a fractal function (with the sign convention above: derivatives roughen, integrals smooth). Researchers (e.g. Raoul Nigmatullin) have “interpreted the fractional integral in terms of the fractal Cantor set.” But at this moment there seems to be no crystal clear and widely accepted explanation of the links between scaling, memory, fractional differintegration order, and fractality. That there must be strong links seems evident, but what exactly are they, and why?
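The dimension-shift claim, at least, is easy to poke at numerically. Here is a rough sketch (not a rigorous box-counting computation; all helper names, parameters, and the crude increment-based exponent estimate are my choices): generate a Weierstrass-type function with Hölder exponent H, half-integrate it with the Grünwald-Letnikov scheme, and watch the estimated exponent move from about H to about H + 1/2, modulo discretization and boundary effects.

```python
import numpy as np

def weierstrass(x, H=0.3, b=2.0, n_terms=14):
    # W(x) = sum_n b^(-n*H) cos(b^n * pi * x); graph dimension 2 - H.
    return sum(b ** (-n * H) * np.cos(b**n * np.pi * x) for n in range(n_terms))

def gl_differintegral(f_vals, alpha, h):
    # Grunwald-Letnikov differintegral; alpha = -0.5 is a half integral.
    # O(n^2), fine for this grid size.
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return np.array([np.dot(w[: j + 1], f_vals[j::-1]) for j in range(n)]) / h**alpha

def increment_exponent(f_vals, lags=(1, 2, 4, 8, 16, 32)):
    # Slope of log(mean |f(x + lag*h) - f(x)|) vs log(lag): a crude
    # estimate of the Hoelder/Hurst exponent of the sampled path.
    lags = np.asarray(lags)
    s = [np.mean(np.abs(f_vals[l:] - f_vals[:-l])) for l in lags]
    return np.polyfit(np.log(lags), np.log(s), 1)[0]

n = 2**14
x = np.linspace(0.0, 1.0, n, endpoint=False)
h = x[1] - x[0]
w_vals = weierstrass(x, H=0.3)
w_int = gl_differintegral(w_vals, -0.5, h)  # half integral (order -1/2)

print("exponent before:", increment_exponent(w_vals))          # roughly 0.3
print("exponent after :", increment_exponent(w_int[n // 4:]))  # roughly 0.8
```

This checks the what, not the why.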
Quick and hopefully temporary answer: I don't know. At this moment I'm looking for intuitive but clear conceptual pictures. The simple examples in my GitHub repository fracturlab illustrate the points above, and give me some preliminary mental pictures that help me think about these things. But I'm still waiting for that aha moment. I'm pausing this project because I have too many things to do before the end of the year, but I'll be back.

How I'm using AI (Grok)
Besides this project, I use AI a lot (mostly Grok but also other tools) for research, summaries, translations and whatnot. It is important to remember that today's AI makes mistakes now and then. My advice to others is: don't use AI to do things that you couldn't do yourself without AI, given enough time. For example, don't use AI to do in one minute a translation that you couldn't do in one day: you need to know both languages well enough to spot and fix errors.
In this project, a prompting style that I've often used to brainstorm ideas is: "Please follow the train of thought that I'm about to begin, comment on each point, criticize when needed, and answer questions."
One thing worth keeping in mind is that Grok (and, I guess, most of its AI friends) makes mistakes now and then, and tends to double down on them. At times, it keeps defending a wrong suggestion with less and less plausible arguments. In these cases, the quickest solution is an order like "Please do this and that instead" (yes, I always say please to Grok, and also thank you, hello, and goodbye - I think this is good practice for a not-so-far future of AI personhood).
Grok consistently tells me that it has no memory of our previous interactions, but I often have the impression that it does. I asked Grok to explain this in detail. Grok told me:
"In large language models like me, Grok 4, the system is designed to maintain context across a conversation. Each time you send a new message, the underlying platform packages your input along with the full history of our previous exchanges in this session. This includes all your prior questions, my responses, and any intermediate steps like tool uses or image generations. That entire bundle is then fed into me as a single, comprehensive prompt. I process it all at once to generate a reply that's consistent and aware of what came before, without needing to "remember" in a human sense - it's recreated fresh each time from the provided history."
This works until the length of a session exceeds Grok's context window, after which point the session history is truncated. Grok tells me that its context window in Expert mode is 256,000 tokens, which corresponds to roughly 200,000 words (that is, a book of several hundred pages). None of my sessions with Grok even begins to approach that length, even though our conversations have many conceptual twists and turns and many coding iterations. However, at times I've had the impression that a session was too long for Grok to handle. This could be due to network issues, but to stay on the safe side, and also to keep the list of sessions clean, at times I ask Grok to summarize the current session and use the summary to start a new one.
The writing on the wall
I often reflect on my experiences with this project. At this moment it is fair to say that, while most (but not all) of the math and code details come from Grok, most (but not all) of the conceptual development and creativity comes from me. In other words, I could continue this project without Grok (it would take much longer though, so much longer that I would probably decide to drop the project), but Grok couldn't continue it without me.
However, I can see the writing on the wall: future versions of Grok or other AI assistants will be able to do these things without human assistance, much faster, and perhaps discover better math and physics. Perhaps, but just perhaps, they'll be able to explain what they have done in a way that we can understand.