Friday, 27 March 2026

AI assistance for low value tasks only?

Can AI only help with low-value tasks? That would be bad. I had this idea while thinking about the podcast with Terence Tao, where he says AI makes him 5x faster at non-essential tasks, tasks he simply would not have done without AI. But those tasks were skippable before as well, so did he actually get faster? AI cannot help with the hardest tasks, and those are the high-value ones. One danger is that we get drawn into low-value tasks. Or maybe we gain more time for the high-value tasks. But we had time for them before, precisely because they were high value. The danger is real either way.

Wednesday, 25 March 2026

Dandelion season is the most beautiful time in the Allgäu: 10.04. to 15.04.

But every year I forget when it is. I am going through my photos right now.

2025

  • On 04.04. none there yet
  • On 15.04. really a lot there
  • On 19.04. it still looks good
  • On 23.04. too
  • On 25.04. I already see some white ones, but still mostly yellow
  • On 16.05. none left (only scattered white ones)
2024
  • 25.03. none yet
  • 30.03. already some there
  • 02.04. already some there, but not at full bloom yet
  • 09.04. maximum
  • 16.04. some already gone
  • 04.05. already quite a lot of white ones
  • 05.05. not even white ones left anymore
2023
  • 22.04. some there; this is also when we did a dandelion bike tour, so they must have been out at least a week earlier. No white ones visible
  • 06.05. None left, a few scattered white ones.

GPT tells me second to third week of April. In the photos, though, I see it can also be the first. I'd guess the first and second week, so 10.04. to 15.04. is a pretty good estimate, shifting somewhat between years. 05.04. to 25.04. as the longer window.

DOMS is not Recovery

Delayed Onset Muscle Soreness (DOMS) is not recovery. So much for "listen to your body". In the past I waited until the DOMS was gone, but I can train earlier. DOMS is a proxy. But a proxy for what? There is a moment where my muscles reach peak performance after they have recovered, followed by a very slow decay. I need to capture that moment. But confounding factors like sleep, or a thousand others, make measuring strength, and thus recovery, messy. Big studies average those confounders away, but that does not work for me as an individual. It seems that Heart Rate Variability (HRV) is a good proxy, validated by such studies. Still, I have to do my own research on whether this is more than just a trend.

Five minutes of research tells me to measure only once per day, right after waking up, when there is the least noise in the day, like coffee or climbing stairs. And a chest strap monitor using electrocardiography (ECG), which measures the electrical activity of the heart directly, is more accurate than photoplethysmography (PPG) as in a smartwatch, which measures blood volume changes optically.
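A common HRV metric is RMSSD, the root mean square of successive differences between RR intervals (the times between heartbeats). A minimal sketch with made-up RR data, not a real recording:

```python
import math

def rmssd(rr_ms: list[float]) -> float:
    """RMSSD: root mean square of successive differences between RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Made-up morning RR intervals in milliseconds.
morning_rr = [812, 845, 790, 830, 805, 850, 798]
print(f"RMSSD: {rmssd(morning_rr):.1f} ms")  # higher usually means better recovered
```

Tracking this number each morning against a personal baseline is the idea; the absolute value matters less than the trend.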

Action: Don't listen to DOMS, but measure HRV after waking up.

Tuesday, 24 March 2026

Human as an Agent

Treat humans as agents. Build a theory of mind to remember what they can do. Integrate them into a multi-agent system together with software agents. Structured output is done via a JSON Schema auto-form builder and automatic code generation by a coding agent. Cron jobs are done in an app with push notifications.
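A minimal sketch of the auto-form idea, with a made-up schema and a hypothetical `form_fields` helper: flatten a JSON Schema object into fields a human agent can be asked to fill in, so their answer is structured output like any software agent's.

```python
# Made-up example schema for a human "approval" step in a multi-agent workflow.
schema = {
    "type": "object",
    "properties": {
        "approved": {"type": "boolean"},
        "comment": {"type": "string"},
    },
    "required": ["approved"],
}

def form_fields(schema: dict) -> list[tuple[str, str, bool]]:
    """Flatten an object schema into (name, type, required) form fields."""
    required = set(schema.get("required", []))
    return [(name, prop["type"], name in required)
            for name, prop in schema.get("properties", {}).items()]

for name, typ, req in form_fields(schema):
    print(f"{name} ({typ}){' *required' if req else ''}")
```

A real form builder would also handle nested objects, enums, and validation; this only shows the flattening step.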

In cleanrooms dust particles are huge

On transistors, features are just a few dozen atoms wide. Here, for example, a feature is 30 atoms wide: https://youtu.be/8DzGp41xcYM?t=203

A typical dust particle is between 10,000 and 500,000 atoms wide, so a single particle spans anywhere from roughly 300 to over 16,000 of those features!
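A quick sanity check on those numbers, taking the 30-atom feature width from the video:

```python
# Dust particle width vs. transistor feature width, both counted in atoms.
feature_atoms = 30
for dust_atoms in (10_000, 500_000):
    print(f"{dust_atoms:,}-atom dust particle spans ~{dust_atoms // feature_atoms:,} features")
```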

Thursday, 19 March 2026

adk cached gemini

import hashlib
from typing import AsyncGenerator

from sqlitedict import SqliteDict

from google.adk.agents.llm_agent import Agent
from google.adk.models.google_llm import Gemini
from google.adk.models.llm_request import LlmRequest
from google.adk.models.llm_response import LlmResponse

class CachedGemini(Gemini):

    _cache: SqliteDict

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._cache = SqliteDict('./cache.sqlite', autocommit=True)

    async def generate_content_async(
        self, llm_request: LlmRequest, stream: bool = False
    ) -> AsyncGenerator[LlmResponse, None]:
        cache_key = hashlib.sha256(llm_request.model_dump_json().encode()).hexdigest() + ('_stream' if stream else '_nostream')
        if cache_key in self._cache:
            print("Cache hit for request")
            for cached_response in self._cache[cache_key]:
                yield LlmResponse.model_validate(cached_response)
            return

        events = [] 
        async for llm_response in super().generate_content_async(llm_request, stream):
            events.append(llm_response.model_dump())
            yield llm_response

        self._cache[cache_key] = events
        
root_agent = Agent(
    model=CachedGemini(model=MODEL),
    ...
)

adk callbacks

from typing import Any, Optional

from google.adk.agents.llm_agent import Agent
from google.adk.agents.callback_context import CallbackContext
from google.adk.models.llm_request import LlmRequest
from google.adk.models.llm_response import LlmResponse
from google.adk.tools.base_tool import BaseTool
from google.adk.tools.tool_context import ToolContext


async def before_agent_callback(callback_context: CallbackContext):
    print("Before agent callback triggered")
    print(callback_context)

async def after_agent_callback(callback_context: CallbackContext):
    print("After agent callback triggered")
    print(callback_context)

def before_model_callback(callback_context: CallbackContext, llm_request: LlmRequest):
    print("Before model callback triggered")
    print(callback_context)
    print(llm_request)

def after_model_callback(callback_context: CallbackContext, llm_response: LlmResponse):
    print("After model callback triggered")
    print(callback_context)
    print(llm_response)

def before_tool_callback(tool: BaseTool, args: dict[str, Any], tool_context: ToolContext) -> Optional[dict]:
    print("Before tool callback triggered")
    print(tool.name, args)
    # Return a dict here instead of None to short-circuit the tool call
    # and use the dict as the tool result:
    if False:
        return {"result": "Tool execution was blocked by before_tool_callback."}


def after_tool_callback(tool: BaseTool, args: dict[str, Any], tool_context: ToolContext, tool_response: dict) -> Optional[dict]:
    print("After tool callback triggered")
    print(tool.name, args, tool_response)


def get_temperature(city: str):
    """A dummy tool to get the temperature of a city."""
    temperature = len(city) * 3  # dummy temperature based on city name length
    return f"The current temperature in {city} is {temperature} degrees Celsius."


root_agent = Agent(
    model='gemini-2.5-flash',
    name='root_agent',
    description='A helpful assistant for user questions.',
    before_agent_callback=before_agent_callback,
    after_agent_callback=after_agent_callback,
    before_model_callback=before_model_callback,
    after_model_callback=after_model_callback,
    before_tool_callback=before_tool_callback,
    after_tool_callback=after_tool_callback,
    instruction='You are a weather assistant. Use get_temperature tool to answer user questions about the weather.',
    tools=[get_temperature]
)

async def main():
    from google.adk.sessions import InMemorySessionService
    from google.adk.runners import Runner

    session_service = InMemorySessionService()

    runner = Runner(
        agent=root_agent,
        app_name="app",
        session_service=session_service, 
    )
    await runner.run_debug("how warm is it in munich?", verbose=True)

if __name__ == "__main__":
    import asyncio
    import dotenv
    dotenv.load_dotenv()  
    asyncio.run(main())
