Apparently we're getting Material 3 soon, and it'll be revealed later this month at I/O: "Google just leaked Android's new design language." I always love when Android refreshes and updates the UI; it's like playing with and customizing a new toy.
Got a new Pixel Watch 3, and man, this thing is a huge upgrade over my Galaxy Watch4 Classic. Looks great, got the larger model so the extra screen real estate is nice, and it performs way better in every aspect. Finally a watch that feels as premium as an Apple Watch.
More Material 3 news: "Google announces Android 16's Material 3 Expressive redesign." Android 16 lands in June for Pixel devices, Samsung "this summer": "Android 16 Getting Official Release Dates for Pixel and Samsung."
Android 16 QPR 1 looks awesome from everything I've seen. So tempted to install, but I don't want to wipe and take a chance of breaking any apps.
Scary and cool at the same time, haha. Although scraping certain elements of the screen with JavaScript and passing the data to a web service that calls an LLM seems to accomplish similar results, and IDEs can already interact with browsers using frameworks like Protractor/Puppeteer/Cypress. Sounds like MCP should make these things easier, though. Interested in reading more about it.
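For a sense of what that scrape-then-ask flow could look like, here's a minimal sketch. The comment mentions JS tools like Puppeteer; this uses Playwright's Python API instead as a stand-in, and the service URL, selector, and payload shape are all hypothetical.

```python
import json


def build_llm_request(page_text: str, question: str) -> str:
    """Bundle scraped page text into a JSON payload for a hypothetical LLM service."""
    return json.dumps({"context": page_text, "question": question})


def scrape_and_ask(url: str, selector: str, question: str) -> str:
    # Playwright's sync API (pip install playwright); imported lazily so the
    # pure payload-building helper above works without a browser installed.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        text = page.inner_text(selector)  # grab just the element we care about
        browser.close()

    payload = build_llm_request(text, question)
    # ...POST `payload` to the web service that calls the LLM...
    return payload
```

The split between the pure helper and the browser call keeps the LLM-facing part easy to test without automation infrastructure.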
yeah i don’t think it’s new in the sense that it’s offering any new technology, it’s just making a standard that has big players at the helm. should encourage buy-in, and it already seems to be doing so. environmental data systems did the same thing thirty years ago. OPeNDAP / Hyrax / ERDDAP were these kinda forward-thinking systems that made it simpler to segment and access satellite data. unfortunately amazon and google have kinda done away with the standards there
For some experimentation fun, I recently tried out development with an LLM for the first time, and it was a neat experience. Built a custom chatbot that called an AWS API Gateway pointing to an AWS Lambda, which read from an S3 bucket for data that would then be used to query a GPT-4 model. Was pretty fun. Basically, the S3 bucket stored documentation about every screen in a web app. The chatbot used JS to call into the API Gateway with the name of the current screen, the Lambda would read any text documents stored under that screen's S3 prefix, and then it would tell the GPT model to answer the user's question based on that provided documentation. Was surprisingly easy; took maybe a week and a half at most.
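A rough sketch of what that Lambda could look like. The event shape, bucket name, and prefix layout are all assumptions, and the actual GPT call is left as a stub; only the prompt assembly is concrete.

```python
import json


def build_prompt(docs: list[str], question: str) -> str:
    """Combine the screen's documentation files into one grounded prompt."""
    context = "\n\n".join(docs)
    return (
        "Answer the user's question using ONLY the documentation below.\n\n"
        f"Documentation:\n{context}\n\n"
        f"Question: {question}"
    )


def lambda_handler(event, context):
    # hypothetical event body: {"screen": "checkout", "question": "..."}
    import boto3  # available in the AWS Lambda runtime; imported lazily here

    body = json.loads(event["body"])
    screen, question = body["screen"], body["question"]

    s3 = boto3.client("s3")
    bucket = "app-screen-docs"  # assumed bucket name
    docs = []
    # read every text document stored under this screen's prefix
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=f"{screen}/")
    for obj in resp.get("Contents", []):
        data = s3.get_object(Bucket=bucket, Key=obj["Key"])
        docs.append(data["Body"].read().decode("utf-8"))

    prompt = build_prompt(docs, question)
    # ...pass `prompt` to the GPT model here and return its answer...
    return {"statusCode": 200, "body": json.dumps({"prompt": prompt})}
```

Keeping the prompt assembly in its own function makes that part unit-testable without touching S3.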
Heh, honestly no idea, since it was just a POC. Would have to dig into some costs on that one. My company is fully invested in AWS, so we have all of those resources available, as well as a licensed version of GPT. In terms of AWS resources, probably pretty minimal aside from the cost of the account itself, since Lambdas are serverless. GPT, probably much more, but that's just a guess.
did it use a RAG package or anything like that to handle the unique documentation? or was it finetuning, using a frozen gpt model? sounds cool either way. that’s a pretty common use case i could see a lot of businesses being interested in. hey, we have all this documentation, we want a chatbot with major guardrails in place that is capable of answering questions only regarding our stuff. like a super intelligent wiki.
I kept it really simple and just passed the instruction and question to a GPT prompt, and told it to respond "Sorry, I cannot find that information in our documentation" otherwise, so it wouldn't just answer any possible question unrelated to the app. The docs were just txt files being read in as strings, so I didn't need to do anything too fancy. I wanted to look into using LlamaIndex to help index things more efficiently, but didn't get enough time to try that out. AWS Bedrock would have been cool to play with, too, but we don't have that approved for use by our security team yet.
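That guardrail instruction is easy to sketch as plain prompt construction. The fallback sentence is the one quoted above; the rest of the wording is my assumption.

```python
FALLBACK = "Sorry, I cannot find that information in our documentation."


def guarded_prompt(documentation: str, question: str) -> str:
    """Instruct the model to answer only from the docs, else give the fallback."""
    return (
        "You answer questions about our web app using ONLY the documentation "
        "provided. If the answer is not in the documentation, respond exactly: "
        f"{FALLBACK}\n\n"
        f"Documentation:\n{documentation}\n\n"
        f"Question: {question}"
    )
```

Prompt-level guardrails like this are best-effort, not a hard boundary, but for an internal docs bot they go a long way.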
Android 16 is out now for Pixel devices. It does NOT have the new UI changes though; those are said to be dropping with the release of the Pixel 10 in August/September.
Oh damn, was not expecting that so soon, even if it doesn't have the new UI features. Hope this fixes some of the battery drain issues I started getting one or two updates ago.
Large Language Model Performance Doubles Every 7 Months "The capabilities of LLMs are doubling every 7 months. By 2030, it may take them mere hours to do tasks that take humans a full month"