Monday, 30 March 2026

The times 4 5 hundred rule: calculate required private capital fast

 Assume you get 7% capital gains, 3.5% inflation, and 25% tax. Then the required capital is the yearly requirement times 38 (1 / ((0.07 − 0.035) × (1 − 0.25))); for doing it fast in your head, times 40 (times 4, add a zero). For the capital required to fund a monthly payout, take the monthly amount times ~456: in your head, times 400 and times 500, then take the middle.

E.g. I need 3000 per month: that is 1.2M (×400) and 1.5M (×500), so 1.35M capital to get it.

The times 4 5 hundred rule.


Btw, to land directly in millions, input 3 instead of 3000, do times 4 and times 5, and divide by ten at the end (effectively times 0.45): 3 × 0.45 = 1.35M.
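The rule in code: a small sketch checking the mental shortcut against the exact factor, with the rates assumed above (7% gains, 3.5% inflation, 25% tax).

```python
def required_capital(monthly_payout: float) -> tuple[float, float]:
    """Return (exact, rule-of-thumb) capital for a given monthly payout."""
    real_after_tax = (0.07 - 0.035) * (1 - 0.25)   # 0.02625 -> factor ~38 per year
    exact = monthly_payout * 12 / real_after_tax
    # Mental shortcut: times 400 and times 500, take the middle (times 450).
    rule = (monthly_payout * 400 + monthly_payout * 500) / 2
    return exact, rule

exact, rule = required_capital(3000)
# exact ≈ 1.37M, rule = 1.35M -- the shortcut lands within ~2% of the exact value
```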

Seeing sense in one's own death is rationalization

 Coping with one's own death is one of the biggest problems. It is one of the biggest values of religion. It was a big argument for fascism, where you continue to exist as part of the bigger collective. Recently, I noticed that more and more people state that death is what gives life its meaning. This sounds like the ultimate rationalization. Since we killed god, we carry the burden of finding sense in life and justifying suffering. Right after we killed him, we tried things like fascism, where the sense is the nation and the suffering is a means to an end. It failed horribly. We still have no answer. Rationalizing death as the giver of sense achieves two things at once: coping with death and finding sense. I cannot see which sense, but that is another story. I also don't see how this would work. If death gives you so much sense, why not die next week? That would give you the ultimate sense in life. Death is too near? So you want to not die. Just ask the same question next week. Either you want to never die, or your life and its sense fade away so slowly that you eventually just want to die. Where is the sense there? Like with religion, seeing sense in death may be a useful illusion, but it is an invented story we tell ourselves. It would be better not to waste time on self-illusions and to take the energy won to figure out a sense in life and just go with that.

Friday, 27 March 2026

AI assistance for low value tasks only?

 Can AI only help with low-value tasks? That would be bad. I had this idea thinking about the podcast with Terence Tao, where he says that AI makes him 5x faster at non-essential tasks, tasks he just would not have done without AI. But things worked back then as well. So did he even get faster? AI cannot help with the hardest tasks, and these are the high-value tasks. The danger is that we get drawn into low-value tasks. Or we get more time for the high-value tasks. But we had time for them before, exactly because they had high value. A danger is there for sure.

Wednesday, 25 March 2026

Dandelion season is the most beautiful time in the Allgäu: 10.04. to 15.04.

 But I forget every year when it is. I am going through my pictures right now.

2025

  • On 04.04. there are none yet
  • On 15.04. there are really a lot
  • On 19.04. it still looks good
  • On 23.04. too
  • On 25.04. I already see some white ones, but still mostly yellow
  • On 16.05. there are none left (only scattered white ones)
2024
  • 25.03. none yet
  • 30.03. already some
  • 02.04. already some, but not yet at full bloom
  • 09.04. maximum
  • 16.04. some already gone
  • 04.05. already quite a lot of white ones
  • 05.04. not even white ones left
2023
  • 22.04. some are there; we also did a dandelion bike tour here. So they must have been there at least a week earlier. No white ones visible
  • 06.05. None left, a few scattered white ones.

GPT tells me the second to third week of April. But in the pictures I see it can also be the first. I think the first and second, so 10.–15.04., is a pretty good value, which shifts between years. 05.04. to 25.04. as the longer window.

DOMS is not Recovery

Delayed Onset Muscle Soreness (DOMS) is not recovery. So much for "listen to your body". In the past I waited until the DOMS was gone, but I can train earlier. DOMS is a proxy. But a proxy for what? There is this moment where my muscles reach peak performance after they have recovered, followed by a very slow decay. I need to capture this moment. But confounding factors like sleep or a thousand other ones make measuring strength, and thus recovery, messy. You get those confounders away via big studies, but not for me as an individual. It seems that Heart Rate Variability (HRV) is a good proxy, validated by such studies. But I still have to do my research on whether this is not just a trend.

A five-minute research tells me to measure only once per day, right after waking up, when there are the fewest noise-causing things in the day, like coffee or stairs. And a chest-strap monitor utilizing electrocardiography (ECG), which measures the electrical activity of the heart directly, is more accurate than photoplethysmography (PPG) as in a smartwatch, which measures blood volume changes optically.

Action: Don't listen to DOMS; measure HRV after waking up.
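If I go the HRV route, RMSSD over the beat-to-beat (RR) intervals from the chest strap is the standard short-term metric. A minimal sketch; the sample intervals are made up.

```python
import math

def rmssd(rr_ms: list[float]) -> float:
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

morning_rr = [812, 845, 790, 830, 818, 842]  # hypothetical waking measurement, in ms
print(round(rmssd(morning_rr), 1))  # → 35.9
```

Higher RMSSD on a given morning would then be read as "more recovered", relative to my own baseline, not an absolute threshold.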

Tuesday, 24 March 2026

Human as an Agent

 Treat humans as agents. Build a theory of mind to remember what they can do. Integrate them into a multi-agent system together with software agents. Structured output is done via a JSON Schema auto-form builder and automatic code generation by a coding agent. Cronjobs are done in an app with push notifications.
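A minimal sketch of what "human as agent" could look like; the form builder, the agent class, and the answer callback are invented for illustration, not an existing framework. In reality the callback would send a push notification and wait for the reply.

```python
def render_form(schema: dict) -> str:
    """Auto-build a (text) form from a JSON Schema -- stand-in for a UI form builder."""
    fields = schema.get("properties", {})
    return "\n".join(f"{name} ({spec.get('type', 'any')}): ____" for name, spec in fields.items())

class HumanAgent:
    """Wraps a human: structured request in, structured output back."""
    def __init__(self, name: str, answer_fn):
        self.name = name
        self._answer_fn = answer_fn  # stand-in for push notification + waiting for the reply

    def call(self, schema: dict) -> dict:
        form = render_form(schema)
        return self._answer_fn(form)

schema = {"type": "object", "properties": {"approved": {"type": "boolean"}}}
human = HumanAgent("reviewer", answer_fn=lambda form: {"approved": True})
result = human.call(schema)  # {'approved': True} -- same shape as any tool result
```

From the orchestrator's point of view, `HumanAgent.call` is just another slow, fallible tool.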

In cleanrooms dust particles are huge

On transistors, features are just a few atoms wide. Like here it is 30 atoms wide: https://youtu.be/8DzGp41xcYM?t=203

A typical dust particle is between 10,000 and 500,000 atoms wide, so a dust particle spans something like 850 of those features!
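A sanity check of the numbers above. The 30-atom feature width is from the video and the dust range is from the text; the ~25,500-atom "typical" particle that yields exactly 850 features is my assumption.

```python
feature_atoms = 30
dust_low, dust_high = 10_000, 500_000       # dust particle width, in atoms
features_low = dust_low // feature_atoms     # 333 features for the smallest particle
features_high = dust_high // feature_atoms   # 16_666 features for the largest
features_typical = 25_500 // feature_atoms   # 850 features, the figure quoted above
```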

Thursday, 19 March 2026

adk cached gemini

import hashlib
from typing import AsyncGenerator

from sqlitedict import SqliteDict

from google.adk.agents.llm_agent import Agent
from google.adk.models.google_llm import Gemini
from google.adk.models.llm_request import LlmRequest
from google.adk.models.llm_response import LlmResponse

class CachedGemini(Gemini):
    """Gemini wrapper that caches full response streams on disk, keyed by request hash."""

    _cache: SqliteDict

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._cache = SqliteDict('./cache.sqlite', autocommit=True)

    async def generate_content_async(
        self, llm_request: LlmRequest, stream: bool = False
    ) -> AsyncGenerator[LlmResponse, None]:
        cache_key = hashlib.sha256(llm_request.model_dump_json().encode()).hexdigest() + ('_stream' if stream else '_nostream')
        if cache_key in self._cache:
            print("Cache hit for request")
            for cached_response in self._cache[cache_key]:
                yield LlmResponse.model_validate(cached_response)
            return

        events = [] 
        async for llm_response in super().generate_content_async(llm_request, stream):
            events.append(llm_response.model_dump())
            yield llm_response

        self._cache[cache_key] = events
        
root_agent = Agent(
    model=CachedGemini(model=MODEL),
    ...
)

adk callbacks

from typing import Any, Optional

from google.adk.agents.llm_agent import Agent
from google.adk.agents.callback_context import CallbackContext
from google.adk.models.llm_request import LlmRequest
from google.adk.models.llm_response import LlmResponse
from google.adk.tools.base_tool import BaseTool
from google.adk.tools.tool_context import ToolContext


async def before_agent_callback(callback_context: CallbackContext):
    print("Before agent callback triggered")
    print(callback_context)

async def after_agent_callback(callback_context: CallbackContext):
    print("After agent callback triggered")
    print(callback_context)

def before_model_callback(callback_context: CallbackContext, llm_request: LlmRequest):
    print("Before model callback triggered")
    print(callback_context)
    print(llm_request)

def after_model_callback(callback_context: CallbackContext, llm_response: LlmResponse):
    print("After model callback triggered")
    print(callback_context)
    print(llm_response)

def before_tool_callback(tool: BaseTool, args: dict[str, Any], tool_context: ToolContext) -> Optional[dict]:
    print("Before tool callback triggered")
    print(tool.name, args)
    # Returning a dict here skips the tool call and uses the dict as the tool response.
    if False:
        return {"result": "Tool execution was blocked by before_tool_callback."}


def after_tool_callback(tool: BaseTool, args: dict[str, Any], tool_context: ToolContext, tool_response: dict) -> Optional[dict]:
    print("After tool callback triggered")
    print(tool.name, args, tool_response)


def get_temperature(city: str):
    """
    A dummy tool to get the temperature of a city.
    """
    temperature = len(city) * 3  # Dummy temperature based on city name length
    return f"The current temperature in {city} is {temperature} degrees Celsius."


root_agent = Agent(
    model='gemini-2.5-flash',
    name='root_agent',
    description='A helpful assistant for user questions.',
    before_agent_callback=before_agent_callback,
    after_agent_callback=after_agent_callback,
    before_model_callback=before_model_callback,
    after_model_callback=after_model_callback,
    before_tool_callback=before_tool_callback,
    after_tool_callback=after_tool_callback,
    instruction='You are a weather assistant. Use get_temperature tool to answer user questions about the weather.',
    tools=[get_temperature]
)

async def main():
    from google.adk.sessions import InMemorySessionService
    from google.adk.runners import Runner

    session_service = InMemorySessionService()

    runner = Runner(
        agent=root_agent,
        app_name="app",
        session_service=session_service, 
    )
    await runner.run_debug("how warm is it in munich?", verbose=True)

if __name__ == "__main__":
    import asyncio
    import dotenv
    dotenv.load_dotenv()  
    asyncio.run(main())

Wednesday, 18 March 2026

Consume stuff to learn things. Is this what makes things interesting?

 It makes sense that we are interested in consuming stuff to learn stuff: those who did survived, so we more likely survive. The problem with AI-generated content is that we cannot learn from it. In the digital age we cannot learn much from content produced by others either, but there is at least a minimal signal there. I can observe how a lake in Sweden looks when it starts to rain. Useful? No idea. But our brain was wired long before that. With AI, though, I know that the information is worthless, no signal. Maybe this is the reason why it feels empty. It is ironic, as there might be a signal, carried through the training. But maybe it is hallucinated. No idea. It feels empty, not interesting. If I see the same shot twice, one which I know is real and one which is AI, one feels more interesting. Maybe this is an old, useless brain talking here.

Btw, I don't care about the effort. If someone took a 4K image, opened Paint, and copied it pixel by pixel with a 1x paint tool, it would take forever (450 work days, about 2 work years, at 1.5s per pixel), but I would not find the result more interesting.

partial orders and arenas

 When I first learned about partial orders, I was confused and did not see where they would be useful. They were introduced in the realm of numbers. Many years later, I love the idea. Arenas are everywhere. A user sees 20 items. He selects one. Again and again. A partial order emerges.

Note: This assumes that humans are perfectly rational, which is wrong, but the intuition still holds and we can use proper methods in practice.
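A sketch of how a partial order emerges from such arenas. The items and picks are invented, and it assumes the unrealistically consistent user from the note above.

```python
from itertools import product

def preferences(arenas: list[tuple[list[str], str]]) -> set[tuple[str, str]]:
    """Collect (winner, loser) pairs from (shown_items, selected_item) arenas."""
    prefs = {(winner, item) for shown, winner in arenas for item in shown if item != winner}
    # transitive closure: if a > b and b > c, then a > c
    changed = True
    while changed:
        new = {(a, d) for (a, b), (c, d) in product(prefs, prefs) if b == c}
        changed = not new <= prefs
        prefs |= new
    return prefs

arenas = [(["a", "b", "c"], "a"), (["b", "c"], "b")]
prefs = preferences(arenas)
# "a" beat "b" and "b" beat "c", so (a, c) is implied; pairs never shown together stay unordered
```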

Tuesday, 17 March 2026

Dental care

Phase 1: Home biofilm management (the primary lever)

Since, according to the report, dental preventive measures show no measurable long-term benefit without one's own behavioral change, the curative potential lies almost entirely in your own routine.

  • Mechanical disruption: regular use of a toothbrush (manual vs. electric-oscillating shows no clinically relevant difference here) plus, mandatorily, aids for the interdental space (interdental brushes, floss, dental sticks).
  • Avoiding mechanical trauma: the report refutes a monocausal origin of gum recession (gingival recession) from incorrect brushing, but classifies excessive brushing force, hard bristles, and horizontal scrubbing techniques as relevant cofactors for tissue loss.
  • Chemical modulation (caries): mandatory use of fluoride toothpaste. Purely mechanical plaque removal is insufficient for caries prevention; fluoride is strictly required for remineralization, the formation of acid-resistant fluorapatite, and the inhibition of bacterial metabolism (enolase inhibition).

Phase 2: Commissioning the dental service (primary prevention)

The report deconstructs the myth of the "professional dental cleaning" as a passively consumed health measure for the periodontally healthy. Treatment time must be reallocated to education.

1. Dropping rigid intervals:

  • Instruction: decline routine, calendar-based appointments (e.g., strictly every 6 or 12 months).
  • Rationale: according to meta-analyses, routine dental cleaning in healthy adults makes "little or no measurable difference" in gingivitis incidence, reduction of probing depths, and plaque scores after two to three years, compared to purely need-based treatment. The indication must be purely risk-based.

2. Focus of the session (oral hygiene instruction, OHI):

  • Instruction: treatment time is to be devoted primarily to checking brushing technique, psychological motivation, and intensive training in the home use of aids (interdental brushes).
  • Rationale: PMPR has no significant added benefit in healthy adults without accompanying OHI. The professional cleaning functions here primarily as an "educational, instructive, and motivating vehicle".

3. Diagnostics and navigation:

  • Instruction: staining the plaque beforehand (plaque disclosure) is obligatory.
  • Rationale: it visualizes targets for your optimization at home and prevents the practitioner from instrumenting clean tooth surfaces.

4. Instrumentation (reduction to the bare necessities):

  • Enamel: on intact, healthy enamel, ultrasound, hand curettes, and air polishing (with erythritol) cause no measurable tissue loss. The choice of instrument is biologically uncritical here.
  • Calculus: per the report, the removal of mineralized deposits in the periodontally healthy is "highly uncertain" in its clinical relevance for tooth retention, since calculus without a vital biofilm does not initiate attachment loss. It is a "subordinate, cosmetic-hygienic addition".
  • Critical restriction (cervical fillings): if you have cervical composite restorations (fillings at the tooth neck), ultrasound and powder-jet methods must be strictly prohibited in these areas. They increase the surface roughness of the composite significantly (by a factor of 1.5 to 2) and thereby provoke secondary caries.

5. End of treatment:

  • Instruction: skip polishing with rubber cup and prophylaxis paste if air polishing was applied before, since it offers no topographic advantage (roughness reduction).
  • Instruction: mandatory local application of fluoride (varnish or gel) for chemical caries prevention, since mechanical PMPR alone is insufficient here.

Shopping list
  • Toothpaste: anything that declares fluoride is good enough
  • Soft to at most medium toothbrush
  • Interdental brushes, where I have to find the size by trial and error
  • Floss for where the brush cannot reach.

Friday, 13 March 2026

Pension

Goalpost 1: 350k, to have my pension secured
Goalpost 2: 1.2M, to live from my money alone, right now and forever

I need 1.08M for 30 years of pension.
Thus I need 350k 32 years before that (3.5% ROI, net of inflation).
I need 4k per month on average right now until the end.
Say I can withdraw 4% yearly; then I need to withdraw 48k per year, thus I need 1.2M right now.

Diagnostic Reasoning is an iterative Information Retrieval Task

 We take ALL known etiologies and use differential diagnosis to filter out as much as possible using the patient presentation, then rank the rest. We take the action with maximal learning, update the patient file, filter, and rank again. It is iterative information retrieval across multiple databases, including the head and body of the patient as query-able databases. All nails, I guess.
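The loop can be sketched as iterative filtering over a toy etiology database; all medical content here is invented for illustration.

```python
# Toy etiology "database": each candidate maps findings to expected values.
CANDIDATES = {
    "flu":        {"fever": True,  "rash": False},
    "measles":    {"fever": True,  "rash": True},
    "dermatitis": {"fever": False, "rash": True},
}

def step(candidates: dict, finding: str, value: bool) -> dict:
    """One retrieval iteration: keep only etiologies consistent with the new finding."""
    return {name: f for name, f in candidates.items() if f.get(finding) == value}

remaining = step(CANDIDATES, "fever", True)   # presentation: fever -> flu, measles remain
remaining = step(remaining, "rash", False)    # next query: no rash -> only flu remains
```

A real version would rank survivors by probability and pick the next finding to query by expected information gain.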

Wednesday, 4 March 2026

Humans are Search Engines and Agentic Search connects fuzzy knowledge graphs

Humans are query-able search engines. Text in, text out.

Each search engine is a fuzzy knowledge-graph.

Agentic Search can connect to many different search engines in an agentic loop for deep research. Including humans.

Tuesday, 3 March 2026

I need 400 kcal/h on my bike

 


Calculated by Gemini, not double-checked. Seems low.

[Embedded "Gravel Cycling Energy Model" calculator (caloric consumption, total power, aerodynamic and rolling-resistance breakdown); the values did not render.]
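A minimal version of the physics such a model would use. All parameters here (speed, CdA, Crr, rider+bike mass, 24% metabolic efficiency) are my assumptions, not Gemini's.

```python
RHO, G = 1.225, 9.81  # air density (kg/m^3), gravity (m/s^2)

def kcal_per_hour(v_kmh=20.0, cda=0.4, crr=0.008, mass_kg=85.0, efficiency=0.24):
    v = v_kmh / 3.6
    aero = 0.5 * RHO * cda * v ** 3           # aerodynamic drag power (W)
    roll = crr * mass_kg * G * v              # rolling resistance power (W)
    metabolic_w = (aero + roll) / efficiency  # mechanical -> metabolic power
    return metabolic_w * 3600 / 4184          # watts to kcal/h

# flat-ground steady state comes out around 280 kcal/h with these assumptions;
# climbing, wind, and stop-and-go would push it toward the ~400 kcal/h in the title
```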

Sunday, 1 March 2026

Just use OpenSearch

1. LLM-Query Compatibility & Representational Density

For autonomous agents, "ergonomics" are a human distraction. The primary metric is Representational Density in LLM training corpora. OpenSearch (ES 7.10 fork) utilizes a JSON-based DSL that is the most documented search interface in history.

  • Zero-Shot Reliability: Agents generate nested bool queries (must/filter/should) with significantly lower hallucination rates compared to Vespa’s YQL or Solr’s XML-adjacent syntax.
  • Deterministic Error Handling: Structured JSON error responses allow agents to parse stack traces and auto-correct query syntax in multi-stage reasoning loops.
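For illustration, the kind of nested bool query meant here, built as a plain dict; the index and field names are invented.

```python
# A hybrid lexical query an agent might emit: required match, hard filters, soft boost.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"abstract": "enolase inhibition"}}],
            "filter": [
                {"term": {"language": "en"}},
                {"range": {"year": {"gte": 2015}}},
            ],
            "should": [{"match_phrase": {"title": "fluorapatite"}}],
        }
    },
    "size": 10,
}
# with opensearch-py this would be sent as client.search(index="papers", body=query)
```

Because the whole request is JSON, a malformed query comes back as a structured error the agent can parse and repair.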

2. Lexical Primacy & Late Interaction (ColBERT)

Medical IR demands exact-match precision for biochemical entities. OpenSearch provides unadulterated BM25 control, avoiding the "black-box" typo-tolerance found in vector-first databases.

$$score(D, Q) = \sum_{q \in Q} IDF(q) \cdot \frac{f(q, D) \cdot (k_1 + 1)}{f(q, D) + k_1 \cdot (1 - b + b \cdot \frac{|D|}{avgdl})}$$

The ColBERT Advantage: Unlike standard bi-encoders that compress abstracts into a single vector, OpenSearch 3.x supports multi-vector Late Interaction. Using the MaxSim operator, the engine preserves token-level nuances (e.g., "Inhibitor X" vs. "Protein Y") that are often lost in 1536-dimensional averages.
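A toy late-interaction scorer makes the MaxSim idea concrete: one vector per token, and each query token takes its best-matching document token. The 2-d vectors stand in for real token embeddings.

```python
def maxsim(query_vecs: list[list[float]], doc_vecs: list[list[float]]) -> float:
    """Sum over query tokens of the maximum dot product against any document token."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

query = [[1.0, 0.0], [0.0, 1.0]]             # two query tokens
doc = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]   # three document tokens
score = maxsim(query, doc)  # 0.9 (best match for token 1) + 0.8 (token 2) = 1.7
```

Averaging the document tokens into one vector first would blur exactly the token-level distinctions ("Inhibitor X" vs. "Protein Y") that MaxSim preserves.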

3. Single-Node Batch Efficiency

With a 12M document corpus and monthly batch updates, we optimize for Read-Heavy Static Segments over real-time mutability.

| Metric            | OpenSearch (Lucene)         | Vespa (C++)               | Manticore (SQL)        |
|-------------------|-----------------------------|---------------------------|------------------------|
| Memory Strategy   | OS Page Cache + 32GB Heap   | Tensors / Mmap            | Columnar / Disk        |
| Latency (Agentic) | < 5s (Complex Hybrid)       | < 1s (High Throughput)    | < 2s (SQL Joins)       |
| Lindy Effect      | High (Established Standard) | Medium (Enterprise-Niche) | High (Sphinx Heritage) |

By setting index.refresh_interval: -1 and index.number_of_replicas: 0 during ingestion, OpenSearch builds contiguous Lucene segments that maximize hardware utilization without the overhead of distributed consensus.

4. Licensing Stability & Governance

In a landscape of "corporate rug-pulls," OpenSearch (Linux Foundation) provides the highest resistance to licensing shifts. Unlike venture-backed alternatives (Weaviate/Typesense) or commercial-pivots (Vespa.ai), OpenSearch remains a community-governed Apache 2.0 utility.

OpenSearch is the optimal choice because it treats code as a liability and cognitive efficiency as a priority. It offers the best blend of Lexical Rigidity, Agent Compatibility, and Operational Insurance.

Iterative Mona Lisa E2E development

Iterative, as at the beginning I know the least; Mona Lisa style, I always want to be able to stop while staying E2E: go wide before going deep. Notes: Agents are i...