<?xml version="1.0" encoding="utf-8" ?><rss version="2.0" xmlns:tt="http://teletype.in/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>Akbarkhon Amirkhonov</title><generator>teletype.in</generator><description><![CDATA[Akbarkhon Amirkhonov]]></description><image><url>https://teletype.in/files/07/97/079796fe-a6e7-4941-9f0e-db6c536c01ae.jpeg</url><title>Akbarkhon Amirkhonov</title><link>https://amirkhonov.com/</link></image><link>https://amirkhonov.com/?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><atom:link rel="self" type="application/rss+xml" href="https://teletype.in/rss/amirkhonov?offset=0"></atom:link><atom:link rel="next" type="application/rss+xml" href="https://teletype.in/rss/amirkhonov?offset=10"></atom:link><atom:link rel="search" type="application/opensearchdescription+xml" title="Teletype" href="https://teletype.in/opensearch.xml"></atom:link><pubDate>Mon, 04 May 2026 11:57:57 GMT</pubDate><lastBuildDate>Mon, 04 May 2026 11:57:57 GMT</lastBuildDate><item><guid isPermaLink="true">https://amirkhonov.com/azure-vm-change-tracking-identity-issue</guid><link>https://amirkhonov.com/azure-vm-change-tracking-identity-issue?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/azure-vm-change-tracking-identity-issue?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>Azure VM Software Inventory Not Collecting Data — Managed Identity Missing</title><pubDate>Tue, 14 Apr 2026 06:41:58 GMT</pubDate><description><![CDATA[Azure VMs show Software Inventory as enabled in the portal, but no data appears in the connected Log Analytics workspace. The ConfigurationData and ConfigurationChange tables stay empty even hours after enablement. 
The Change Tracking extension appears provisioned with no obvious errors.]]></description><content:encoded><![CDATA[
  <h2 id="symptoms">Symptoms</h2>
  <p id="OzkF">Azure VMs show Software Inventory as <strong>enabled</strong> in the portal, but no data appears in the connected Log Analytics workspace. The <code>ConfigurationData</code> and <code>ConfigurationChange</code> tables stay empty even hours after enablement. The Change Tracking extension appears provisioned with no obvious errors.</p>
  <p id="X0C7">On the VM itself, the Azure Monitor Agent (AMA) logs repeat the following errors every 4-5 minutes:</p>
  <pre id="H7hF">Failed to find an output stream for: &quot;&quot;
Error sending kusto telemetry data through output handler.
socket/pipe Error while sending request data
Error while reading settings  dial unix @CAgentStream_CloudAgentInfo_config_default_fluent.socket: connect: connection refused</pre>
  <hr />
  <h2 id="root-cause">Root Cause</h2>
  <p id="lpUx">The <strong>Azure Monitor Agent cannot authenticate to Azure</strong> because the VM has no managed identity assigned.</p>
  <p id="Ht04">AMA uses the VM&#x27;s managed identity to get a token from the Azure Instance Metadata Service (IMDS) endpoint (<code>http://169.254.169.254</code>). This token is required to:</p>
  <ul id="zZS6">
    <li id="ZEPN">Download the Data Collection Rule (DCR) configuration from Azure Monitor Configuration Service (AMCS)</li>
    <li id="Jmfn">Send collected data to the Log Analytics workspace</li>
  </ul>
  <p id="fzhQ">Without a token, AMA starts but fails to initialize its data pipeline. The Unix socket that the ChangeTracking extension communicates through (<code>@CAgentStream_CloudAgentInfo_config_default_fluent.socket</code>) is never created, because it only exists once the pipeline is up. The ChangeTracking extension then loops retrying the socket connection indefinitely.</p>
  <p id="AYqZ">The error in <code>/var/opt/microsoft/azuremonitoragent/log/mdsd.err</code> confirms this:</p>
  <pre id="8Ole">Failed to get MSI token from IMDS endpoint: http://169.254.169.254 ErrorCode:-2146041343</pre>
  <hr />
  <h2 id="diagnosis-steps">Diagnosis Steps</h2>
  <h3 id="1-verify-ama-is-running-but-has-no-socket">1. Verify AMA is running but has no socket</h3>
  <pre id="rIKw">systemctl is-active azuremonitoragent

# Check if the socket exists
ss -xlp | grep fluent</pre>
  <p id="7D6q">If AMA is <strong>active</strong> but the socket is <strong>missing</strong>, proceed to step 2.</p>
  <h3 id="2-check-ama-error-logs">2. Check AMA error logs</h3>
  <pre id="1SKy">sudo tail -50 /var/opt/microsoft/azuremonitoragent/log/mdsd.err</pre>
  <p id="nQbu">Look for <code>Failed to get MSI token from IMDS endpoint</code>.</p>
  <h3 id="3-confirm-the-vm-has-no-managed-identity">3. Confirm the VM has no managed identity</h3>
  <p id="K2aA">In the <strong>Azure portal</strong>: navigate to the VM → <strong>Security</strong> → <strong>Identity</strong>.</p>
  <ul id="CNeu">
    <li id="y5YC"><strong>System assigned</strong> tab: Status should be <strong>On</strong></li>
    <li id="Yqig"><strong>User assigned</strong> tab: should list at least one identity</li>
  </ul>
  <p id="Yrcr">If both are empty/off, the managed identity is missing.</p>
  <h3 id="4-confirm-imds-is-reachable-but-returns-no-identity">4. Confirm IMDS is reachable but returns no identity</h3>
  <p id="JyI8">Run this from inside the VM:</p>
  <pre id="s9VU">curl -s -H &quot;Metadata: true&quot; \
  &quot;http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&amp;resource=https://monitor.azure.com/&quot;</pre>
  <p id="6MT6">A VM with no managed identity returns an error response rather than an access token.</p>
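<p>If you are checking several VMs, the two response shapes are easy to tell apart mechanically. Below is a minimal sketch (the helper name is mine): a success body carries an <code>access_token</code> field, while a VM with no identity gets a JSON error body instead.</p>

```shell
#!/usr/bin/env bash
# Classify an IMDS identity-endpoint response body (sketch).
# A VM with a managed identity returns JSON containing "access_token";
# a VM without one returns a JSON error body instead.
imds_response_status() {
  case "$1" in
    *'"access_token"'*) echo "token" ;;
    *'"error"'*)        echo "no-identity" ;;
    *)                  echo "unknown" ;;
  esac
}
```

<p>Feed it the output of the curl command above to get a one-word verdict per VM.</p>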
  <h3 id="5-verify-dcr-config-was-never-downloaded">5. Verify DCR config was never downloaded</h3>
  <pre id="DOjJ">ls /etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/</pre>
  <p id="sKHc">The directory will be <strong>empty</strong>, confirming AMA never reached the configuration service.</p>
  <hr />
  <h2 id="fix">Fix</h2>
  <p id="jjLh">Enable system-assigned managed identity on the VM:</p>
  <p id="KQZZ"><strong>Azure portal</strong>: VM → <strong>Security</strong> → <strong>Identity</strong> → <strong>System assigned</strong> → set Status to <strong>On</strong> → <strong>Save</strong></p>
  <p id="SW59"><strong>Azure CLI</strong>:</p>
  <pre id="a0XU">az vm identity assign -g &lt;resource-group&gt; -n &lt;vm-name&gt;</pre>
  <p id="IYWC">After saving, restart AMA:</p>
  <pre id="2Kk3">sudo systemctl restart azuremonitoragent</pre>
  <p id="j4tY">Within 2-3 minutes, verify the socket is created:</p>
  <pre id="9iNb">ss -xlp | grep fluent</pre>
  <p id="qgGS">And confirm AMA is downloading its configuration:</p>
  <pre id="3esu">ls /etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/</pre>
  <hr />
  <h2 id="health-check-after-fix">Health Check After Fix</h2>
  <ol id="epuA">
    <li id="XPmo">Check AMA service is running: <code>systemctl status azuremonitoragent</code></li>
    <li id="Rn4k">Confirm the socket exists: <code>ss -xlp | grep fluent</code></li>
    <li id="xxf5">Verify DCR config downloaded: <code>ls /etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/</code></li>
    <li id="HUYv">No new errors in: <code>/var/opt/microsoft/azuremonitoragent/log/mdsd.err</code></li>
  </ol>
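<p>The four checks above can be strung together into one pass. This is a sketch, not an official tool: the <code>check</code> helper and its PASS/FAIL wording are mine, while the commands and paths are the ones used throughout this post.</p>

```shell
#!/usr/bin/env bash
# Sketch: run the post-fix health checks from this post in one pass.
check() {
  local desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
  fi
}

check "AMA service active"     systemctl is-active azuremonitoragent
check "fluent socket present"  bash -c 'ss -xlp | grep -q fluent'
check "DCR config downloaded"  bash -c 'ls /etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/* 2>/dev/null | grep -q .'
check "no MSI token errors"    bash -c '! grep -q "Failed to get MSI token" /var/opt/microsoft/azuremonitoragent/log/mdsd.err 2>/dev/null'
```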
  <hr />
  <h2 id="references">References</h2>
  <ul id="c5XX">
    <li id="DYlb"><a href="https://learn.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm" target="_blank">Azure Monitor Agent troubleshooting — Linux</a></li>
    <li id="Ujnl"><a href="https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/how-to-configure-managed-identities" target="_blank">Configure managed identities on Azure VMs</a></li>
    <li id="BAGN"><a href="https://learn.microsoft.com/azure/azure-change-tracking-inventory/quickstart-monitor-changes-collect-inventory-azure-change-tracking-inventory" target="_blank">Enable Change Tracking and Inventory for Azure VMs</a></li>
  </ul>

]]></content:encoded></item><item><guid isPermaLink="true">https://amirkhonov.com/azure-ai-102-exam-preparation</guid><link>https://amirkhonov.com/azure-ai-102-exam-preparation?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/azure-ai-102-exam-preparation?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>How I Passed the Microsoft Azure AI-102 Exam</title><pubDate>Mon, 10 Nov 2025 07:18:04 GMT</pubDate><category>Azure</category><description><![CDATA[<img src="https://img1.teletype.in/files/49/1a/491ac14b-9b45-4b5b-853d-cb8e99ea3485.png"></img>Hey folks! As a DevOps engineer with a passion for cloud automation and AI integration, I've been diving deeper into Azure's ecosystem. Recently, on November 10, 2025, I sat for and passed the AI-102: Designing and Implementing a Microsoft Azure AI Solution exam. It was a challenging but rewarding experience that bridged my DevOps skills with AI engineering. If you're a fellow engineer eyeing this certification, this post is for you—I'll share my preparation strategy, resources, and tips to help you ace it.]]></description><content:encoded><![CDATA[
  <p id="qLI7">Hey folks! As a DevOps engineer with a passion for cloud automation and AI integration, I&#x27;ve been diving deeper into Azure&#x27;s ecosystem. Recently, on November 10, 2025, I sat for and <strong>passed the AI-102: Designing and Implementing a Microsoft Azure AI Solution exam</strong>. It was a challenging but rewarding experience that bridged my DevOps skills with AI engineering. If you&#x27;re a fellow engineer eyeing this certification, this post is for you—I&#x27;ll share my preparation strategy, resources, and tips to help you ace it.</p>
  <h2 id="qSh1">Why I Pursued AI-102</h2>
  <p id="nnpF">In my day-to-day role, I automate pipelines, manage infrastructure as code, and integrate services across Azure. AI is becoming integral to modern apps—think chatbots in CI/CD tools, image analysis in deployment monitoring, or natural language processing for log analytics. The AI-102 cert focuses on building and deploying Azure AI solutions, which aligns perfectly with DevOps principles like CI/CD integration, monitoring, and security. Plus, it&#x27;s a great way to level up in the Azure AI Engineer Associate path. If you have a background in Python/C#, REST APIs, or Azure fundamentals (like AZ-900), this is a natural next step.</p>
  <h2 id="fdXx">Step 1: Exploring the Official Study Guide</h2>
  <p id="FmCy">The foundation of my prep was the official Microsoft study guide at <a href="https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/ai-102" target="_blank">https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/ai-102</a>. It&#x27;s updated as of April 30, 2025, with no major changes since then (I double-checked for any November updates—none yet). This guide outlines what to expect: a 120-minute exam with multiple-choice, drag-and-drop, and case studies, needing a 700+ score to pass.</p>
  <p id="bjzn">Key highlights:</p>
  <ul id="6LZN">
    <li id="CB4E"><strong>Audience Profile</strong>: For engineers who design, build, deploy, and manage AI solutions using Azure AI services like Vision, Language, Speech, Search, and OpenAI.</li>
    <li id="4H6g"><strong>Skills Measured</strong> (20-25% on planning/managing; 15-20% on generative AI; 5-10% on agentic solutions; 10-15% on computer vision; 15-20% on NLP; 15-20% on knowledge mining).</li>
    <li id="gKod"><strong>Changes</strong>: Recent updates emphasize Azure AI Foundry, responsible AI, and new sections on agentic solutions and generative AI optimization.</li>
  </ul>
  <p id="tzSP"><strong>Pro Tip: </strong>Start here to map out gaps in your knowledge. The guide links to free practice assessments and the exam sandbox—use them early!</p>
  <h2 id="u5WV">Step 2: Creating a Learning Plan and Managing Time</h2>
  <p id="ziKs">Consistency is key in DevOps, and the same applies to cert prep. I planned for 2-3 months (about 8-10 weeks), studying 1-2 hours daily and 4-6 hours on weekends. Here&#x27;s my structured plan, aligned with the exam domains:</p>
  <ol id="lIsM">
    <li id="MK1b"><strong>Week 1-2: Plan and Manage Azure AI Solutions (20-25%)</strong><br />Focus: Service selection, responsible AI principles, resource creation (via Portal/CLI/ARM), CI/CD integration, monitoring (Azure Monitor, costs), security (Key Vault, Entra ID).<br />Time: 10-15 hours.<br />Hands-on: Set up an Azure AI resource and deploy a simple container.</li>
    <li id="wIfX"><strong>Week 3-4: Generative AI and Agentic Solutions (20-30% combined)</strong><br />Focus: Building with Azure AI Foundry/OpenAI, prompts, RAG patterns, agents (Semantic Kernel, Autogen), optimization (parameters, fine-tuning).<br />Time: 15-20 hours.<br />Hands-on: Deploy GPT models, create custom agents.</li>
    <li id="InZd"><strong>Week 5: Computer Vision Solutions (10-15%)</strong><br />Focus: Image analysis (OCR, object detection), custom models, video insights (Video Indexer, Spatial Analysis).<br />Time: 8-10 hours.<br />Hands-on: Train a Custom Vision model and analyze videos.</li>
    <li id="KrHg"><strong>Week 6: Natural Language Processing Solutions (15-20%)</strong><br />Focus: Text/speech analysis, translation, custom models (CLU, QnA), intent recognition.<br />Time: 10-12 hours.<br />Hands-on: Build a speech-to-text app and a multi-language QnA bot.</li>
    <li id="V5DZ"><strong>Week 7: Knowledge Mining and Information Extraction (15-20%)</strong><br />Focus: Azure AI Search (indexes, skillsets), Document Intelligence (prebuilt/custom models), content understanding.<br />Time: 10-12 hours.<br />Hands-on: Create a search index with custom skills and extract data from docs.</li>
    <li id="FgF6"><strong>Week 8-10: Review, Practice, and Mock Exams</strong><br />Time: 20+ hours.<br />Revisit weak areas, take the free practice assessment on Microsoft Learn, and simulate the exam environment.</li>
  </ol>
  <p id="JTYm">Tools for time management: I used a custom Obsidian note for tracking progress, set daily Pomodoro sessions (25-min focus bursts), and blocked calendar time. Adjust based on your schedule: if you work full-time, aim for evenings and weekends to avoid burnout.</p>
  <h2 id="06ea">Step 3: Key Resources I Used</h2>
  <p id="IOKR">I curated and relied on a mix of official and community resources. Here&#x27;s what worked:</p>
  <ul id="h4d9">
    <li id="yaLA"><strong>My Own Repository</strong>: I built <a href="https://github.com/amirkhonov/microsoft-ai-102-exam-study-guide" target="_blank">https://github.com/amirkhonov/microsoft-ai-102-exam-study-guide</a> based on the skills measured. It organizes everything by domain with links to Microsoft Learn modules, official docs, quickstarts, and hands-on labs. Feel free to fork and contribute!</li>
    <li id="qavy"><strong>Mindmap for Visual Learners</strong>: The repo at <a href="https://github.com/lrivallain/ai-102-mindmap" target="_blank">https://github.com/lrivallain/ai-102-mindmap</a> is gold. It&#x27;s a markmap-based outline covering all skills hierarchically—great for quick reviews. It details services like Azure AI Vision, Speech, Search, and OpenAI, with practical steps (e.g., CLI commands, API examples). Note that this repository may reference outdated or retired services, so double-check everything against the current docs.</li>
    <li id="DFPN"><strong>Other Notes and Repos</strong>: Check out <a href="https://github.com/vatsprat/AI-102-AI-Engineer-Associate-Certification-Exam-" target="_blank">https://github.com/vatsprat/AI-102-AI-Engineer-Associate-Certification-Exam-</a> for personal notes on clearing the exam. While it&#x27;s lightweight, it inspired me to jot down my own summaries.</li>
    <li id="VnaH"><strong>Study Resources from the Guide</strong>: Azure docs for each service (e.g., OpenAI, Vision), Microsoft Q&amp;A for doubts, and Tech Community forums.</li>
  </ul>
  <h2 id="IKNt">Step 4: Hands-On Practice with Azure Subscription</h2>
  <p id="sJCg">Theory alone won&#x27;t cut it—AI-102 is heavy on implementation. I used my Azure subscription for labs. If you&#x27;re new, start with the free trial (azure.microsoft.com/free) which gives $200 credit for 30 days. Opt for free tiers where possible (e.g., Azure AI Search free tier, OpenAI playground).</p>
  <p id="HcMe">Tips to Control Costs:</p>
  <ul id="jyR7">
    <li id="fW8V">Monitor usage in Azure Cost Management—set budgets and alerts.</li>
    <li id="YYyb">Delete resources after labs (use Azure CLI: <code>az group delete</code>).</li>
    <li id="fkfU">Stick to low-quota deployments (e.g., standard SKU for testing).</li>
    <li id="ttVj">I spent about $50 over 2 months, mostly on OpenAI tokens and Vision processing—worth it for real-world skills.</li>
  </ul>
  <p id="HjQU">Hands-on examples: Deployed a RAG-based app with Azure AI Foundry, trained custom vision models, and integrated Speech SDK in a Python script. This made concepts like prompt engineering and skillsets stick.</p>
  <h2 id="MAmn">Exam Day Experience and Tips</h2>
  <p id="IXoR">The exam had ~58 questions, including case studies on end-to-end solutions. It tested practical scenarios: choosing services, securing resources, optimizing generative AI.</p>
  <p id="hx6K">Tips:</p>
  <ul id="Xd7i">
    <li id="hcyd"><strong>Focus on Changes</strong>: Know the April 2025 updates—e.g., agentic solutions, responsible AI governance.</li>
    <li id="wTcY"><strong>Practice Responsibly</strong>: Questions on content safety, prompt shields, and ethical AI are common.</li>
    <li id="RE8M"><strong>Time Management</strong>: Skip tough ones first; use the extra 30 mins if English isn&#x27;t your first language.</li>
    <li id="Tma7"><strong>Utilize Microsoft Learn During the Exam: </strong>Since Microsoft Docs and Learn are accessible during the exam (for non-Fundamentals certs like AI-102), familiarize yourself with their structure to quickly search for necessary information on services, APIs, and configurations.</li>
    <li id="bKao"><strong>Don&#x27;t Cram</strong>: Review mindmaps the day before, get good sleep.</li>
    <li id="3aIv"><strong>Post-Exam</strong>: Renew annually via free assessments on Learn.</li>
  </ul>
  <h2 id="Ix1K">Final Thoughts</h2>
  <p id="rbdc">Passing AI-102 has already paid off—I&#x27;m now integrating AI into my DevOps pipelines more confidently. If you&#x27;re prepping, start with the official guide, build a plan, and get hands-on. Check my repo for structured resources.</p>
  <p id="kStS">What cert are you chasing next? Let me know in the comments.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://amirkhonov.com/5-bash-tricks</guid><link>https://amirkhonov.com/5-bash-tricks?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/5-bash-tricks?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>5 Bash Tricks That Will Make You a Better SRE</title><pubDate>Wed, 22 Oct 2025 06:37:34 GMT</pubDate><category>Bash</category><description><![CDATA[When an incident hits production, time slows down — and every second counts. Dashboards start flashing red, CPU usage spikes across clusters, logs grow by the megabyte, and alerts flood your Telegram channel.]]></description><content:encoded><![CDATA[
  <p id="62s5">When an incident hits production, time slows down — and every second counts. Dashboards start flashing red, CPU usage spikes across clusters, logs grow by the megabyte, and alerts flood your Telegram channel.  </p>
  <p id="olw7">In that chaos, your terminal becomes your battlefield — and <strong>how you use Bash </strong>determines whether you’re firefighting or fixing.  <br /><br />Most engineers know basic Bash commands, but great SREs know how to bend the shell to their will. Here are <strong>five Bash tricks</strong> that separate seasoned SREs from everyone else — the kind that turn a 30-minute debug session into a five-minute win.  </p>
  <h2 id="FuIc">1. Process Substitution — Compare Anything, Anywhere, Instantly</h2>
  <p id="Eoyu">Think beyond pipes. <strong>Process substitution</strong> (<code>&lt;(...)</code>) lets you treat live command outputs as files — perfect for real-time comparisons.  </p>
  <pre id="Mpd2" data-lang="bash"># Compare logs from two servers
diff &lt;(ssh server1 &quot;tail -f /var/log/app.log&quot;) &lt;(ssh server2 &quot;tail -f /var/log/app.log&quot;)

# Monitor two pods side by side
paste &lt;(kubectl logs -f pod1) &lt;(kubectl logs -f pod2) | column -t

# Compare configs across environments
diff &lt;(curl -s https://prod.api.com/config) &lt;(curl -s https://staging.api.com/config)</pre>
  <p id="Mpd2">This feature feels like magic: no temp files, no manual juggling — just instant, live insight. </p>
  <h2 id="vCg3">2. Command History — Your Incident Time Machine</h2>
  <p id="Pxch">When things break, there’s no time for retyping long commands. Bash history expansion gives you superhuman speed:</p>
  <pre id="4ghH" data-lang="bash">!kubectl       # Repeat the last kubectl command
^tpyo^typo     # Fix a typo from the previous command
sudo !!        # Rerun the last command with sudo
cp /var/log/app.log !$   # Use the last argument again</pre>
  <p id="4ghH"><strong>Pro tip:</strong><br />Add this to your <code>.bashrc</code> to preview what history expansion will run:</p>
  <pre id="maiC" data-lang="bash">bind Space:magic-space</pre>
  <p id="maiC">Now hitting the spacebar after <code>!kubectl</code> shows exactly what Bash will execute — a lifesaver during high-stress debugging.</p>
  <h2 id="704V">3. Brace Expansion — Batch Operations Without Scripts  </h2>
  <p id="IMA0">Brace expansion <code>{}</code> is like having loops baked right into Bash syntax.</p>
  <pre id="KQ5b" data-lang="bash"># Create backups
mkdir backup-{db,logs,config}-$(date +%Y%m%d)

# Check health for multiple environments
for env in {prod,staging,dev}; do
  echo &quot;=== $env ===&quot;
  curl -s https://$env.api.com/health | jq &#x27;.status&#x27;
done

# Restart multiple hosts
for host in web-{01..05}; do
  ssh $host &quot;sudo systemctl restart nginx&quot;
done</pre>
  <p id="KQ5b">With this, you automate repetitive operations in one line — no need for temporary scripts or overcomplicated loops.</p>
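<p>Ranges and lists can also be combined and nested: two brace groups expand as a cross product, with the left group varying slowest. A quick illustration (plain bash, no external commands needed):</p>

```shell
# Zero-padded ranges keep host names sorted correctly
echo web-{01..03}            # web-01 web-02 web-03

# Two brace groups expand as a cross product (left varies slowest)
echo {prod,dev}-{db,cache}   # prod-db prod-cache dev-db dev-cache
```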
  <h2 id="FSyY">4. Command Substitution With Error Handling — Smart and Safe Automation</h2>
  <p id="5579">Command substitution <code>$(...)</code> lets your scripts react to live data. Pair it with simple error handling, and you get safe, self-correcting automation:</p>
  <pre id="NYW8" data-lang="bash">if pod_name=$(kubectl get pods -l app=critical --no-headers 2&gt;/dev/null); then
  echo &quot;Found pod: $pod_name&quot;
  kubectl describe pod $pod_name
else
  echo &quot;❌ No pods found - checking deployments&quot;
  kubectl get deployments -l app=critical
fi</pre>
  <p id="NYW8">Why it matters: instead of hardcoding names and assumptions, your commands adapt to the current system state — a must for dynamic, ever-changing environments.</p>
  <h2 id="uttV">5. Parameter Expansion — Config Without Config Managers</h2>
  <p id="8h36">This is Bash’s built-in templating system — perfect for defaults, fallbacks, and string manipulation.</p>
  <pre id="gGoY" data-lang="bash">DATABASE_URL=${DATABASE_URL:-postgresql://localhost:5432/app}
LOG_LEVEL=${LOG_LEVEL:-INFO}
TIMEOUT=${TIMEOUT:-30}</pre>
  <p id="gGoY">Or extract and manipulate values:</p>
  <pre id="pwar" data-lang="bash">config=&quot;/etc/myapp/prod/database.conf&quot;
env=${config%/*}; env=${env##*/}   # -&gt; &quot;prod&quot;

db_url=&quot;postgresql://user:secret@db.example.com:5432/app&quot;
safe_url=${db_url//:*@/:xxx@}      # -&gt; &quot;postgresql:xxx@db.example.com:5432/app&quot; (greedy: masks user and password)</pre>
  <p id="pwar">With this, you can handle complex configuration logic right inside Bash — no Python, no jq, no sed needed.</p>
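<p>One caveat: glob matching in <code>${db_url//:*@/:xxx@}</code> is greedy, so it also swallows the <code>//user</code> part. If you want to keep the scheme and username intact, a small helper built from the same expansions works. This is a sketch of mine, assuming the <code>scheme://user:password@rest</code> shape shown above:</p>

```shell
# Mask only the password in a scheme://user:password@rest URL (sketch).
mask_url() {
  local url=$1
  local rest=${url#*://}              # user:password@rest (or just rest)
  case "$rest" in
    *:*@*)
      local userinfo=${rest%%@*}      # user:password
      printf '%s://%s:xxx@%s\n' "${url%%://*}" "${userinfo%%:*}" "${rest#*@}"
      ;;
    *) printf '%s\n' "$url" ;;        # nothing to mask
  esac
}
```

<p>Running <code>mask_url "$db_url"</code> on the example above prints <code>postgresql://user:xxx@db.example.com:5432/app</code>.</p>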
  <h2 id="9elZ">Bonus: Instant System Dashboard</h2>
  <p id="SLjw">Need a quick status view during an outage? One command:</p>
  <pre id="2U7P" data-lang="bash">watch -n1 &#x27;
echo &quot;=== CPU/MEMORY ===&quot;
top -bn1 | head -5
echo -e &quot;\n=== DISK USAGE ===&quot;
df -h | grep -v tmpfs
echo -e &quot;\n=== NETWORK ===&quot;
ss -tuln | grep LISTEN | head -5
echo -e &quot;\n=== RECENT ERRORS ===&quot;
tail -n3 /var/log/syslog | grep -i error
&#x27;</pre>
  <p id="2U7P">This gives you a live, auto-updating dashboard — CPU, memory, disk, network, and errors — refreshed every second.</p>
  <h2 id="MZtK">Why These Tricks Matter</h2>
  <p id="Qw0k">SREs operate under pressure, often on unfamiliar systems. These Bash habits give you:</p>
  <ul id="Qw0k-list">
    <li id="Qw0k-1"><strong>Speed</strong> — fewer keystrokes, faster recovery.</li>
    <li id="Qw0k-2"><strong>Reliability</strong> — fewer manual errors.</li>
    <li id="Qw0k-3"><strong>Portability</strong> — works everywhere Bash does.</li>
    <li id="Qw0k-4"><strong>Scalability</strong> — handle multiple servers and services simultaneously.</li>
  </ul>
  <h2 id="nDFr">Final Thoughts</h2>
  <p id="KWqL">These aren’t just “cool hacks” — they’re battle-tested techniques that define a senior SRE’s command-line fluency. Master them, and you’ll turn Bash from a basic tool into an extension of your engineering intuition. Next time your dashboard lights up like a Christmas tree — you’ll be ready.<br /></p>

]]></content:encoded></item><item><guid isPermaLink="true">https://amirkhonov.com/troubleshoot-azure-vm-change-tracking-agent</guid><link>https://amirkhonov.com/troubleshoot-azure-vm-change-tracking-agent?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/troubleshoot-azure-vm-change-tracking-agent?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>When Azure Change Tracking Goes Silent: My Debugging Journey</title><pubDate>Mon, 06 Oct 2025 14:35:38 GMT</pubDate><description><![CDATA[After enabling the Change Tracking and Inventory agent on one of my Windows VMs, I expected to see a nice stream of inventory data showing up in the Azure portal. Instead, I was greeted with… nothing. No inventory data, no configuration updates—just a quiet portal and a suspicious feeling that something had gone wrong.]]></description><content:encoded><![CDATA[
  <p id="GQmC">After enabling the <em>Change Tracking and Inventory</em> agent on one of my Windows VMs, I expected to see a nice stream of inventory data showing up in the Azure portal. Instead, I was greeted with… nothing. No inventory data, no configuration updates—just a quiet portal and a suspicious feeling that something had gone wrong.</p>
  <p id="AGhb">Digging into the agent logs confirmed my suspicion. Here’s what I saw:</p>
  <pre id="6h6U">time=&quot;2025-09-23T15:23:03Z&quot; level=info msg=&quot;Agent Process got the configFolder C:\\Packages\\Plugins\\Microsoft.Azure.ChangeTrackingAndInventory.ChangeTracking-Windows\\2.35.0.0. \n&quot;
time=&quot;2025-09-23T15:23:03Z&quot; level=error msg=&quot;socket/pipe Error while sending request data&quot;
time=&quot;2025-09-23T15:23:03Z&quot; level=error msg=&quot;Error while reading settings  open \\\\.\\\\pipe\\\\CAgentStream_CloudAgentInfo_AzureMonitorAgent: The system cannot find the file specified.&quot;
time=&quot;2025-09-23T15:24:03Z&quot; level=error msg=&quot;Failed to find an output stream for CONFIG_CHANGE_BLOB&quot;
time=&quot;2025-09-23T15:24:03Z&quot; level=error msg=&quot;Error sending kusto telemetry data through output handler.&quot;</pre>
  <p id="VkcZ">That “pipe not found” error immediately stood out. It looked like the Change Tracking agent was trying to send data to the Azure Monitor Agent (AMA), but couldn’t reach it. Essentially, the Change Tracking agent depends on the AMA to send telemetry to Azure—but if that communication channel fails, all the data just… stalls.</p>
  <p id="qjEY">To understand what was going on, I stumbled upon a great deep dive by Lucas Lifes: <a href="https://blog.lucaslifes.com/p/deep-dig-into-windows-change-tracking-and-inventory-with-azure-montior-agent/" target="_blank">Deep Dig into Windows Change Tracking and Inventory with Azure Monitor Agent</a>. It explains beautifully how the Change Tracking extension feeds its configuration and telemetry through the AMA. This gave me a clue: maybe the problem wasn’t with Change Tracking itself, but with the Monitor Agent underneath it.</p>
  <p id="Huwr">Sure enough, when I checked the AMA logs under <br /><code>C:\WindowsAzure\Resources\AMADataStore.test-vm0\Configuration</code>,<br /> I found this gem:</p>
  <pre id="IMUA">Info (2025-10-06T10:01:13Z): MonAgentManager.exe - Non-success status code from IMDS for MSI token with default identity. 
URI [/metadata/identity/oauth2/token?api-version=2018-02-01&amp;resource=https://monitor.azure.com/], 
Status code=400, response [{&quot;error&quot;:&quot;invalid_request&quot;,&quot;error_description&quot;:&quot;Identity not found&quot;}]</pre>
  <p id="eaR6">There it was. The AMA was failing to get an MSI (Managed Service Identity) token from the Azure Instance Metadata Service (IMDS). Without a valid identity, it couldn’t authenticate to Azure Monitor, and without authentication, it couldn’t send data.</p>
  <p id="nTaQ">In plain terms:</p>
  <ul id="coZv">
    <li id="9Wx8">Change Tracking was trying to push its telemetry.</li>
    <li id="qnpy">AMA was supposed to handle that transmission.</li>
    <li id="ZMS9">AMA couldn’t talk to Azure because it had no identity.</li>
    <li id="CAXA">Therefore, no data appeared in the portal.</li>
  </ul>
  <p id="hgNN">The fix came from another excellent write-up by <a href="https://ramanareddy-v.medium.com/troubleshooting-azure-monitor-agent-ama-on-windows-virtual-machines-a4d96d718cc9" target="_blank">Ramana Reddy</a>. The solution was to <strong>ensure the VM has a system-assigned or user-assigned managed identity enabled</strong>, and that the Azure Monitor Agent is allowed to use it. Once that was configured properly, the errors vanished, and the inventory data started flowing in as expected.</p>
  <p id="qEBr">It’s a small but classic example of how Azure agents rely on a daisy chain of dependencies—one missing link, and the whole system quietly fails. The key takeaway?<br /> When your Change Tracking data doesn’t show up, don’t just look at the extension. Follow the pipes—literally. The problem might be sitting one layer below, in your monitor agent’s authentication chain.</p>
  <p id="Q8ro">Next time you enable Change Tracking and Inventory, double-check that your VM has a valid managed identity and that Azure Monitor Agent can use it. It’ll save you a few hours of log spelunking.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://amirkhonov.com/entra-id-oauth2-token-issuer</guid><link>https://amirkhonov.com/entra-id-oauth2-token-issuer?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/entra-id-oauth2-token-issuer?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>Azure AD Token Issuer Mismatch: Resolving sts.windows.net vs login.microsoftonline.com</title><pubDate>Wed, 17 Sep 2025 12:06:17 GMT</pubDate><description><![CDATA[Working with Azure Active Directory authentication can sometimes present unexpected challenges. One such issue that frequently surfaces in development teams involves receiving access tokens with an unexpected issuer claim. Specifically, applications may receive tokens issued by sts.windows.net when the expected issuer should be login.microsoftonline.com.]]></description><content:encoded><![CDATA[
  <p id="EDAV">Working with Azure Active Directory authentication can sometimes present unexpected challenges. One such issue that frequently surfaces in development teams involves receiving access tokens with an unexpected issuer claim. Specifically, applications may receive tokens issued by <code>sts.windows.net</code> when the expected issuer should be <code>login.microsoftonline.com</code>.</p>
  <p id="FD7A">This discrepancy often leads to authentication failures and can be particularly frustrating when working with third-party integrations or modern authentication frameworks that expect specific token formats.</p>
  <h2 id="problem-analysis">Problem Analysis</h2>
  <p id="SuwD">The issue manifests when examining the <code>iss</code> (issuer) claim within access tokens. Instead of the anticipated modern issuer format, developers encounter:</p>
  <ul id="9Fuf">
    <li id="Auhc"><strong>Received</strong>: <code>https://sts.windows.net/{tenant-id}/</code></li>
    <li id="Kmxl"><strong>Expected</strong>: <code>https://login.microsoftonline.com/{tenant-id}/v2.0</code></li>
  </ul>
  <p id="QCVj">This difference stems from Azure AD&#x27;s dual token versioning system, where applications can receive tokens in either v1.0 or v2.0 format depending on their configuration.</p>
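  <p>You can confirm which format your application is actually receiving by base64url-decoding the token payload and reading the <code>iss</code> claim directly. A sketch that builds an illustrative token rather than using a real one (never paste real tokens into logs or scripts):</p>

```python
import base64
import json

def issuer_of(jwt: str) -> str:
    """Extract the `iss` claim from a JWT without verifying its signature."""
    payload_b64 = jwt.split(".")[1]
    # JWTs use unpadded base64url; restore padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["iss"]

# Build an illustrative v1.0-style token (header and signature are dummies)
claims = {"iss": "https://sts.windows.net/11111111-1111-1111-1111-111111111111/"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "e30." + body + ".dummysig"
print(issuer_of(token))  # the sts.windows.net issuer above
```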
  <h2 id="token-format-differences">Token Format Differences</h2>
  <p id="mmgi">Azure AD maintains two distinct token formats to ensure backward compatibility while supporting modern authentication requirements:</p>
  <p id="ZL4I"><strong>Version 1.0 Format</strong></p>
  <ul id="NIVE">
    <li id="KN7x">Uses <code>sts.windows.net</code> as the issuer</li>
    <li id="iwxP">Primarily designed for work and school accounts</li>
    <li id="S6Su">Represents the legacy token structure</li>
    <li id="gBcO">Automatically assigned to applications without explicit version specification</li>
  </ul>
  <p id="KrV1"><strong>Version 2.0 Format</strong></p>
  <ul id="4qqc">
    <li id="F3DM">Uses <code>login.microsoftonline.com</code> as the issuer</li>
    <li id="0la9">Supports both personal and organizational accounts</li>
    <li id="ePxN">Provides enhanced OpenID Connect compatibility</li>
    <li id="sYuN">Offers improved claim structure and additional features</li>
  </ul>
  <h2 id="implementation-solution">Implementation Solution</h2>
  <p id="Boer">The resolution requires modifying the application registration&#x27;s manifest to explicitly request v2.0 tokens. This involves updating the <code>accessTokenAcceptedVersion</code> property.</p>
  <h3 id="configuration-steps">Configuration Steps</h3>
  <p id="xNmo"><strong>1. Access Application Registration</strong> Navigate to the Azure Portal and locate the application registration under Azure Active Directory &gt; App registrations.</p>
  <p id="ywCa"><strong>2. Open Application Manifest</strong> Select the target application and access the &quot;Manifest&quot; section from the left navigation panel.</p>
  <p id="9kAM"><strong>3. Modify Token Version</strong> Locate the <code>accessTokenAcceptedVersion</code> property and update its value:</p>
  <pre id="imwY">&quot;accessTokenAcceptedVersion&quot;: 2</pre>
  <p id="M2ec"><strong>4. Apply Changes</strong> Save the manifest and allow several minutes for the changes to propagate across Azure AD infrastructure.</p>
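  <p>After the change propagates, you can sanity-check the downloaded manifest programmatically. A small sketch; the manifest excerpt is illustrative and real manifests contain many more properties:</p>

```python
import json

def accepts_v2_tokens(manifest_json: str) -> bool:
    """Check whether an app registration manifest requests v2.0 access tokens."""
    manifest = json.loads(manifest_json)
    return manifest.get("accessTokenAcceptedVersion") == 2

# Illustrative manifest excerpt
print(accepts_v2_tokens('{"accessTokenAcceptedVersion": 2}'))  # True
```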
  <h2 id="technical-implications">Technical Implications</h2>
  <p id="qaq8">The choice of token format has several downstream effects on application behavior and integration patterns:</p>
  <p id="49hi"><strong>Validation Logic</strong>: Applications performing issuer validation must account for the correct expected format. Hardcoded validation against a specific issuer will fail if the token format differs from expectations.</p>
  <p id="oire"><strong>Library Compatibility</strong>: Modern authentication libraries, particularly those implementing OpenID Connect, often expect v2.0 token formats. Using v1.0 tokens may result in compatibility issues or reduced functionality.</p>
  <p id="oBTW"><strong>Claims Structure</strong>: While both versions contain similar core claims, the structure and availability of certain claims may differ between versions.</p>
  <h2 id="migration-considerations">Migration Considerations</h2>
  <p id="I6Q5">Organizations planning to update existing applications should evaluate several factors:</p>
  <p id="wgPr"><strong>Testing Requirements</strong>: Comprehensive testing across all authentication flows ensures that the token format change doesn&#x27;t introduce regressions.</p>
  <p id="2huB"><strong>Dependent Systems</strong>: Any downstream services consuming these tokens must be evaluated for compatibility with the new issuer format.</p>
  <p id="No3J"><strong>Rollback Planning</strong>: Understanding how to revert changes quickly is essential for production environments.</p>
  <h2 id="best-practices">Best Practices</h2>
  <p id="INQc"><strong>New Application Development</strong> When creating new Azure AD integrations, explicitly set <code>accessTokenAcceptedVersion</code> to 2 during the initial application registration process.</p>
  <p id="VarE"><strong>Authentication Library Selection</strong> Utilize Microsoft Authentication Library (MSAL) which naturally aligns with v2.0 endpoints and token formats.</p>
  <p id="BMXh"><strong>Environment Management</strong> Test token format changes in development and staging environments before implementing in production systems.</p>
  <h2 id="alternative-approaches">Alternative Approaches</h2>
  <p id="723n">In scenarios where modifying the application manifest isn&#x27;t feasible, consider these alternatives:</p>
  <ul id="Jmop">
    <li id="s84u">Update token validation logic to accept multiple issuer formats</li>
    <li id="oFCs">Implement conditional validation based on token version detection</li>
    <li id="qnDL">Create a new application registration with v2.0 configuration</li>
  </ul>
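  <p>The first alternative can be sketched as a validator that allows both issuer formats for your tenant (the tenant ID below is a placeholder):</p>

```python
def is_valid_issuer(iss: str, tenant_id: str) -> bool:
    """Accept both the v1.0 and v2.0 issuer formats for one tenant."""
    allowed = {
        "https://sts.windows.net/" + tenant_id + "/",                # v1.0
        "https://login.microsoftonline.com/" + tenant_id + "/v2.0",  # v2.0
    }
    return iss in allowed

tenant = "11111111-1111-1111-1111-111111111111"
print(is_valid_issuer("https://sts.windows.net/" + tenant + "/", tenant))  # True
```

  <p>Treat this as a stopgap: once every consumer accepts v2.0 tokens, tighten validation back to a single expected issuer.</p>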
  <h2 id="conclusion">Conclusion</h2>
  <p id="35qO">The token issuer mismatch between <code>sts.windows.net</code> and <code>login.microsoftonline.com</code> represents a common configuration issue rather than a system malfunction. Understanding Azure AD&#x27;s token versioning system and appropriately configuring the <code>accessTokenAcceptedVersion</code> property resolves this issue while positioning applications for better compatibility with modern authentication standards.</p>
  <p id="DZSf">The transition from v1.0 to v2.0 tokens not only addresses immediate issuer validation concerns but also provides access to enhanced authentication features and improved integration capabilities with contemporary identity management systems.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://amirkhonov.com/0xtools-overview</guid><link>https://amirkhonov.com/0xtools-overview?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/0xtools-overview?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>Unveiling 0x.tools</title><pubDate>Mon, 15 Sep 2025 10:28:39 GMT</pubDate><description><![CDATA[0x.tools is a suite of open-source utilities by Tanel Poder, designed to provide deep insights into how applications behave under Linux. The key goals are:]]></description><content:encoded><![CDATA[
  <p id="sTQ4">0x.tools is a suite of open-source utilities by Tanel Poder, designed to provide deep insights into how applications behave under Linux. The key goals are:</p>
  <ul id="7Oa6">
    <li id="k4xh"><strong>Low friction</strong>: minimal dependencies, no kernel modules, no heavy monitoring infrastructure.</li>
    <li id="qfRJ"><strong>Thread-level visibility</strong>: ability to see what each thread is doing — whether it is running, sleeping, waiting on I/O, in kernel, etc.</li>
    <li id="q3Jo"><strong>Always on, or close to it</strong>: tools for continuous sampling to catch intermittent or rare issues.</li>
  </ul>
  <p id="RuYc">By combining <code>/proc</code> sampling (for broad support, including older kernels) with newer eBPF-based functionality, 0x.tools bridges the gap between “traditional Linux tools” (top, ps, etc.) and more advanced observability setups.</p>
  <h2 id="YZHl">Key Components &amp; Tools</h2>
  <h3 id="iVvz">🔹 xcapture (v1, v2, v3-alpha)</h3>
  <p id="edRr">The heart of 0x.tools. It continuously samples threads, capturing their state (running, waiting, sleeping), the current syscall, wait channels, and even stack traces. By storing this data in hourly CSVs, you can “rewind time” during troubleshooting. Perfect for diagnosing elusive issues like lock contention or I/O stalls.</p>
  <hr />
  <h3 id="82SV">🔹 xtop</h3>
  <p id="YRTG">Think of it as a “supercharged top.” xtop gives a live, interactive view of processes and threads, but with more detail than <code>top</code> or <code>htop</code>. It shows wall-clock times, kernel events, and individual thread behavior — ideal when you need a real-time snapshot with depth.</p>
  <hr />
  <h3 id="N4XS">🔹 psn (Process Snapper)</h3>
  <p id="mtlc">A lightweight way to capture what threads are doing right now. It reveals which syscalls are active, what wait channels they’re in, and which threads are blocked. Useful for identifying immediate blockers in your system.</p>
  <hr />
  <h3 id="gbWJ">🔹 schedlat</h3>
  <p id="SsLB">This tool zooms in on scheduling latency — how long threads spend waiting before the CPU picks them up. It’s invaluable for spotting CPU starvation, scheduling bottlenecks, and workload imbalances.</p>
  <hr />
  <h3 id="mnn8">🔹 Supporting Utilities</h3>
  <p id="hqCK">Other tools like <code>lsds</code>, <code>syscallargs</code>, <code>tracepointargs</code>, and <code>xstack</code> add detail about block devices, syscall arguments, kernel tracepoints, and stack behavior. Together, they extend your visibility from surface symptoms into root causes.</p>
  <h2 id="gd8A">Why It Matters — Use Cases &amp; Trade-Offs</h2>
  <h3 id="g049">Use Cases</h3>
  <ul id="wNF1">
    <li id="RdGZ"><strong>Production issue investigation</strong>: When something bad happens occasionally (latency spike, system pause, IO stall), classic monitoring (CPU, memory, I/O metrics) might not show <em>why</em>. 0x.tools lets you sample <em>what threads were doing</em> at those moments.</li>
    <li id="O9St"><strong>Kernel vs Application boundary issues</strong>: Sometimes the delay is inside the kernel — e.g. lock contention, fsync, block device waits. 0x.tools highlights those.</li>
    <li id="Gmkr"><strong>Legacy or constrained environments</strong>: Environments where you can’t install kernel modules or change kernel version easily. Since many components use <code>/proc</code> sampling, it supports older systems.</li>
    <li id="zk7K"><strong>Continuous profiling strategy</strong>: By collecting lightweight samples over time, you build a historical view. When trouble hits, you can inspect preceding behavior.</li>
  </ul>
  <h3 id="vihj">Trade-Offs &amp; Considerations</h3>
  <ul id="D9wi">
    <li id="ImMs">Overhead is low but non-zero: sampling, even once per second, consumes some CPU, though the tools are designed to keep it under 1%.</li>
    <li id="Bcmv">On systems with tens of thousands of active threads, even sampling /proc can become expensive; you might need to reduce sample frequency.</li>
    <li id="9JT5">While eBPF adds more power and richer detail, it may not be available or supported on all Linux kernels or in all operating environments (enterprises, older machines).</li>
    <li id="iG7y">The tooling is strong for diagnosing <em>what is happening</em>, but offers little in the way of visualization or dashboards (though those are on the roadmap). It assumes comfort with the command line, parsing CSVs, and similar tasks.</li>
  </ul>
  <hr />
  <h2 id="EcHQ">How It Works — Architecture &amp; Methods</h2>
  <ul id="frkb">
    <li id="bRfU"><strong>Proc-based sampling</strong>: Many tools in 0x.tools simply read from <code>/proc</code> (through which Linux exposes kernel statistics as virtual files) at intervals, capturing thread state, syscalls, wait channels, and more.</li>
    <li id="ojYb"><strong>eBPF</strong>: Where supported, newer components (xcapture v3-alpha, etc.) leverage eBPF for more precise event instrumentation with less overhead. Enables off-CPU sampling, hooking into kernel tracepoints, etc.</li>
    <li id="A2gw"><strong>Historical archival</strong>: Samples can be written to hourly CSV archives, enabling “look back” after an issue. You can use standard text processing tools (awk, grep, etc.) or load into databases.</li>
  </ul>
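  <p>The proc-based approach boils down to periodically parsing files such as <code>/proc/PID/stat</code>. A minimal sketch of the parsing step, run against a sample line rather than a live system (field layout per the Linux <code>proc(5)</code> format):</p>

```python
def thread_state(stat_line: str) -> tuple:
    """Return (comm, state) from a /proc/PID/stat line.
    The comm field is parenthesized and may contain spaces, so split
    on the last closing parenthesis rather than on whitespace."""
    lparen = stat_line.index("(")
    rparen = stat_line.rindex(")")
    comm = stat_line[lparen + 1:rparen]
    state = stat_line[rparen + 2]  # single-letter state follows ") "
    return comm, state

# Illustrative line: PID 1234, command "worker 1", in uninterruptible sleep (D)
sample = "1234 (worker 1) D 1 1234 1234 0 -1 4194560"
print(thread_state(sample))  # ('worker 1', 'D')
```

  <p>A sampler is then just this parse in a loop over <code>/proc/*/task/*/stat</code>, with each snapshot appended to an archive.</p>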
  <h2 id="mVjp">Practical Advice for Using 0x.tools Well</h2>
  <ol id="EdRA">
    <li id="nXgd"><strong>Start small in production</strong><br /> Try sampling every few seconds or longer, maybe only on some hosts, to get a feel for overhead.</li>
    <li id="m33r"><strong>Correlate with external metrics</strong><br /> Use 0x.tools in conjunction with your usual monitoring stack (CPU, mem, IO, latency). When dashboards show anomalies, check 0x.tools archives to see thread behavior.</li>
    <li id="mTdf"><strong>Use historical data</strong><br /> The ability to capture continuous or regular samplings means you might capture the root cause even before you realize there’s an issue.</li>
    <li id="Wutp"><strong>Know your kernel/environment limitations</strong><br /> If eBPF is not available, stick to the proc-based tools. Some kernel versions limit certain tracepoints.</li>
    <li id="IY9z"><strong>Automate retention / cleanup</strong><br /> CSV archives can grow; set up scripts to compress, rotate, archive or drop old data.</li>
  </ol>
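  <p>Point 5 can be a small scheduled script. A sketch that prunes archives older than a retention window; the directory layout and <code>.csv</code> suffix are assumptions, so adapt them to wherever your sampler writes:</p>

```python
import os
import time

def prune_old_archives(directory: str, days: int = 7, suffix: str = ".csv") -> list:
    """Delete sample archives older than `days` days; return the names removed."""
    cutoff = time.time() - days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        # Keep recent files; remove anything past the retention window
        if name.endswith(suffix) and cutoff > os.path.getmtime(path):
            os.remove(path)
            removed.append(name)
    return sorted(removed)
```

  <p>Run it from cron or a systemd timer; compressing instead of deleting is a one-line variation if you want longer history.</p>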
  <hr />
  <h2 id="ot6V">Why 0x.tools Fills an Important Niche</h2>
  <p id="jkYy">In many Linux performance stacks, there are gaps:</p>
  <ul id="SxRT">
    <li id="tOvd">Tools like <strong>Prometheus</strong>, <strong>Grafana</strong>, and <strong>CloudWatch</strong> aggregate metrics and show system usage over time, but they often don’t expose <em>why</em> a thread is waiting, or which syscall is slow.</li>
    <li id="geMt">Distributed tracing (Jaeger, Zipkin) shows request flows, but not the low-level wait, lock, or kernel layer behavior inside threads.</li>
    <li id="lpkT">Traditional tools (top, ps) are great, but they either focus on CPU usage, don’t show sleeping/waiting threads in detail, or require manual invocation.</li>
  </ul>
  <p id="YF4q">0x.tools sits in that gap: providing <strong>thread-level, kernel-aware, low overhead visibility</strong>, both live and historical.</p>
  <hr />
  <h2 id="oIwB">Conclusion</h2>
  <p id="DpCb">0x.tools is an exciting toolset for anyone who manages Linux servers and cares about performance on a deeper level. It offers:</p>
  <ul id="6BJo">
    <li id="MjCF">visibility into what threads are waiting on, sleeping, or doing, rather than just coarse CPU / memory usage,</li>
    <li id="LlpJ">the ability to catch intermittent or rare performance degradation,</li>
    <li id="aYb5">application in environments where heavier instrumentation is difficult or not allowed.</li>
  </ul>
  <p id="t4ha">For system administrators, site reliability engineers, performance engineers: 0x.tools can reduce the time to identify root causes by clarifying what is really going on inside your system when things seem “slow” but no obvious metrics are showing a problem.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://amirkhonov.com/HDxFG_GhfJP</guid><link>https://amirkhonov.com/HDxFG_GhfJP?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/HDxFG_GhfJP?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>Fixing Colima Mount Errors on macOS M1/M2/M3 (The Quick &amp; Clean Way)</title><pubDate>Mon, 21 Apr 2025 04:22:50 GMT</pubDate><description><![CDATA[If you’ve been working with Docker on macOS using Colima, chances are you’ve run into frustrating mount errors — especially on Apple Silicon machines (M1, M2, M3). You’re not alone. The issue can pop up after OS upgrades, Colima version changes, or even just random reboots.]]></description><content:encoded><![CDATA[
  <p id="2p7b">If you’ve been working with Docker on macOS using <a href="https://github.com/abiosoft/colima" target="_blank">Colima</a>, chances are you’ve run into frustrating mount errors — especially on Apple Silicon machines (M1, M2, M3). You’re not alone. The issue can pop up after OS upgrades, Colima version changes, or even just random reboots.</p>
  <p id="GTBD">Here’s a no-fluff fix that’s been working reliably.</p>
  <h2 id="FfRQ">🧯 The Problem</h2>
  <p id="fdQL">You’re starting Colima and seeing something like this:</p>
  <pre id="sQDZ">ERROR: mount type &quot;reverse-sshfs&quot; failed: ...</pre>
  <p id="Nuom">Or your containers simply don’t see your shared volumes at all.</p>
  <p id="v0Vi">It’s usually related to how Colima sets up the <code>sshfs</code> mount — and the problem tends to show up more often on M1/M2/M3 machines due to tighter system permissions and architecture differences.</p>
  <h2 id="AZwx">✅ The Fix</h2>
  <p id="QL2V">Here’s the step-by-step fix I recommend:</p>
  <h3 id="WlIP">1. <strong>Stop Colima (if running)</strong></h3>
  <pre id="EYXX">colima stop</pre>
  <hr />
  <h3 id="Rzem">2. <strong>Reset Colima&#x27;s configuration</strong></h3>
  <pre id="IfiA">colima delete
</pre>
  <p id="vDKJ">This deletes the Colima VM and everything stored inside it, including downloaded Docker images and containers, so plan to re-pull images afterwards.</p>
  <hr />
  <h3 id="EMd3">3. <strong>Install or Reinstall macFUSE</strong></h3>
  <p id="Npjn">Colima depends on <code>macFUSE</code> for reverse mounts.</p>
  <ul id="hNID">
    <li id="xpm1">Download from: <a href="https://osxfuse.github.io" target="_blank">https://osxfuse.github.io</a></li>
    <li id="kRWg">Or install via Homebrew:</li>
  </ul>
  <pre id="j824">brew install --cask macfuse
</pre>
  <blockquote id="81Tq">⚠️ After installing, you <strong>must reboot</strong> your Mac. System Extensions require a reboot to become active.</blockquote>
  <hr />
  <h3 id="DcUL">4. <strong>Start Colima with the right mount type</strong></h3>
  <pre id="Dnv3">colima start --mount-type=sshfs</pre>
  <p id="rWUm">If you want to make it permanent:</p>
  <pre id="S7S1">colima start --mount-type=sshfs --edit</pre>
  <p id="ZjgT">Edit the config to keep <code>mountType: sshfs</code>.</p>
  <hr />
  <h3 id="IyCk">5. <strong>(Optional) Use <code>mount</code> flag for custom volumes</strong></h3>
  <p id="V11p">Example:</p>
  <pre id="UeuE">colima start --mount ~/projects:/projects:rw --mount-type=sshfs</pre>
  <hr />
  <h2 id="Uilg">🧪 Check If It Works</h2>
  <p id="53N2">Run a quick container to confirm:</p>
  <pre id="QUhc">docker run -it --rm -v ~/projects:/projects alpine sh</pre>
  <p id="ZNyQ">Then inside the container:</p>
  <pre id="7Me6">ls /projects</pre>
  <p id="WQZ1">If you see your local files, you&#x27;re good to go.</p>
  <hr />
  <h2 id="DJ84">📝 Bonus: Check Version Compatibility</h2>
  <p id="7CJT">Make sure your Colima, Lima, and Docker versions are compatible:</p>
  <pre id="iSCw">colima version
docker --version</pre>
  <p id="DT22">If things still don’t work, check the logs:</p>
  <pre id="p7Ft">colima stop
colima start --verbose</pre>
  <hr />
  <h2 id="OD7O">🧹 Clean Up (If Needed)</h2>
  <p id="bYLo">If <code>sshfs</code> keeps failing, you can try switching to 9p (on newer Colima versions):</p>
  <pre id="Br4B">colima start --mount-type=9p</pre>
  <p id="qNE9">It&#x27;s faster and doesn’t depend on macFUSE, but compatibility can vary.</p>
  <hr />
  <h2 id="ZGNz">🚀 Conclusion</h2>
  <p id="YMQB">Mount errors with Colima on Apple Silicon are super common but easily fixable with the right setup. Reinstall <code>macFUSE</code>, reboot, and use <code>--mount-type=sshfs</code> — that’s usually all it takes.</p>
  <p id="WzD3">Let me know if you’ve found a more elegant solution or if something still breaks — happy to update the guide!</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://amirkhonov.com/difference-recovery-services-and-backup-vault</guid><link>https://amirkhonov.com/difference-recovery-services-and-backup-vault?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/difference-recovery-services-and-backup-vault?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>Difference Between Recovery Services Vault and Backup Vault in Azure</title><pubDate>Tue, 11 Mar 2025 14:33:42 GMT</pubDate><description><![CDATA[When planning your data protection strategy in Microsoft Azure, you'll encounter two similar-sounding options: Recovery Services Vault and Backup Vault. While both serve data protection purposes, they have distinct features, use cases, and capabilities. This blog post explores the key differences to help you choose the right solution for your needs.]]></description><content:encoded><![CDATA[
  <p id="BDx4">When planning your data protection strategy in Microsoft Azure, you&#x27;ll encounter two similar-sounding options: Recovery Services Vault and Backup Vault. While both serve data protection purposes, they have distinct features, use cases, and capabilities. This blog post explores the key differences to help you choose the right solution for your needs.</p>
  <h2 id="ZXSn">Recovery Services Vault: The Comprehensive Solution</h2>
  <p id="eYM2">Recovery Services Vault is Azure&#x27;s original and more comprehensive data protection solution. It functions as a storage entity that houses backup data for various Azure services.</p>
  <h3 id="BBGR">Key Features:</h3>
  <ul id="8MkK">
    <li id="gBJE"><strong>Dual functionality</strong>: Supports both Azure Backup and Azure Site Recovery</li>
    <li id="2UNE"><strong>Broad resource support</strong>: Protects Azure VMs, SQL in VMs, SAP HANA in VMs, Azure Files, on-premises servers, and more</li>
    <li id="Zsw6"><strong>Cross-region recovery</strong>: Enables disaster recovery scenarios across Azure regions</li>
    <li id="398w"><strong>Legacy compatibility</strong>: Supports older workloads and implementations</li>
  </ul>
  <h3 id="7L7Y">Best For:</h3>
  <p id="bVVg">Organizations requiring both backup and disaster recovery capabilities in a single management interface.</p>
  <h2 id="k1BJ">Backup Vault: The Specialized Alternative</h2>
  <p id="ic6t">Backup Vault is a newer, specialized offering focused exclusively on backup operations with enhanced capabilities.</p>
  <h3 id="miWC">Key Features:</h3>
  <ul id="zRfx">
    <li id="uGrz"><strong>Backup-specific</strong>: Exclusively for backup operations (no disaster recovery functionality)</li>
    <li id="5l43"><strong>Targeted workload support</strong>: Designed for specific workloads like Azure Database for PostgreSQL servers, Azure Blobs, and Azure Disks</li>
    <li id="XJJO"><strong>Enhanced operational efficiency</strong>: Offers more granular backup policies and simplified management</li>
    <li id="pthO"><strong>Cost optimization</strong>: Generally more cost-effective for pure backup scenarios</li>
    <li id="w4xD"><strong>Modern architecture</strong>: Built on newer technologies for better performance</li>
  </ul>
  <h3 id="Tc8B">Best For:</h3>
  <p id="BfQH">Organizations with specific backup needs that don&#x27;t require disaster recovery functionality.</p>
  <h2 id="W68t">Key Differences Summarized</h2>
  <p id="OjwJ"><strong>Primary Function</strong>: Recovery Services Vault handles both backup and disaster recovery, while Backup Vault focuses exclusively on backup operations.</p>
  <p id="IxxO"><strong>Workload Support</strong>: Recovery Services Vault supports a broader range of services including on-premises servers, whereas Backup Vault is specialized for specific Azure services like databases and blob storage.</p>
  <p id="ARAD"><strong>Architecture</strong>: Recovery Services Vault uses the original unified design, while Backup Vault employs a modern, purpose-built architecture optimized for backup operations.</p>
  <p id="3OVp"><strong>Management</strong>: Recovery Services Vault offers more comprehensive but complex management, whereas Backup Vault provides streamlined administration specifically for backup tasks.</p>
  <p id="7zTA"><strong>Cost Structure</strong>: Recovery Services Vault has combined pricing for backup and disaster recovery services, while Backup Vault features optimized pricing specifically for backup operations.</p>
  <h2 id="uf9m">Making the Right Choice</h2>
  <p id="591J">Choose a <strong>Recovery Services Vault</strong> if you:</p>
  <ul id="y0bS">
    <li id="VHpb">Need both backup and disaster recovery capabilities</li>
    <li id="5qEP">Want to protect a wide range of Azure and on-premises workloads</li>
    <li id="4kM6">Have existing deployments already using this solution</li>
  </ul>
  <p id="Nwz8">Choose a <strong>Backup Vault</strong> if you:</p>
  <ul id="QawY">
    <li id="U9sp">Only need backup functionality (no disaster recovery)</li>
    <li id="5Jlt">Are primarily protecting specific Azure database services or disk storage</li>
    <li id="wcJG">Want a more streamlined, cost-effective backup solution</li>
  </ul>
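  <p>The decision criteria above can be condensed into a tiny helper, purely as an illustration (the workload labels are made up for the sketch, not Azure identifiers):</p>

```python
def pick_vault(needs_disaster_recovery: bool, workloads: set) -> str:
    """Suggest a vault type from the criteria above (illustrative labels)."""
    backup_vault_workloads = {"postgresql", "blobs", "disks"}
    if needs_disaster_recovery or not workloads.issubset(backup_vault_workloads):
        return "Recovery Services Vault"
    return "Backup Vault"

print(pick_vault(False, {"blobs", "disks"}))   # Backup Vault
print(pick_vault(True, {"blobs"}))             # Recovery Services Vault
print(pick_vault(False, {"on-prem servers"}))  # Recovery Services Vault
```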
  <p id="TP37">Many organizations employ both solutions, using Recovery Services Vault for comprehensive protection of critical systems and Backup Vault for specialized workloads that only require backup capabilities.</p>
  <p id="phU7">By understanding these differences, you can develop a more efficient, effective, and economical data protection strategy for your Azure environment.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://amirkhonov.com/fixing-sysprep-issues</guid><link>https://amirkhonov.com/fixing-sysprep-issues?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/fixing-sysprep-issues?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>Fixing Sysprep Errors While Building Windows Server Images</title><pubDate>Thu, 10 Oct 2024 16:10:09 GMT</pubDate><tt:hashtag>windows</tt:hashtag><tt:hashtag>sysprep</tt:hashtag><description><![CDATA[When building Windows Server 2019 and 2022 images in Azure, you might encounter Sysprep errors that prevent the process from completing successfully. These errors usually arise during the image generalization phase, and they can be tricky to resolve, as seen in the following error messages:]]></description><content:encoded><![CDATA[
  <p id="bm7y">When building Windows Server 2019 and 2022 images in Azure, you might encounter Sysprep errors that prevent the process from completing successfully. These errors usually arise during the image generalization phase, and they can be tricky to resolve, as seen in the following error messages:</p>
  <pre id="JzGD">azure-arm.windows2022: 2024-10-05 21:34:00, Error SYSPRP MRTGeneralize:98 - ERROR: Failed ConnectServer
azure-arm.windows2022: 2024-10-05 21:34:02, Error SYSPRP BCD: BiUpdateEfiEntry failed c000000d
azure-arm.windows2022: 2024-10-05 21:34:02, Error SYSPRP BCD: BiExportBcdObjects failed c000000d
azure-arm.windows2022: 2024-10-05 21:34:02, Error SYSPRP BCD: BiExportStoreAlterationsToEfi failed c000000d
azure-arm.windows2022: 2024-10-05 21:34:02, Error SYSPRP BCD: Failed to export alterations to firmware. Status: c000000d</pre>
  <p id="EYRT">The previous error message was caught from sysprep logs, which I found using the following condition:</p>
  <pre id="XAPU"># Read the current image state from the registry (checked inside a polling loop)
$imageState = (Get-ItemProperty &#x27;HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\State&#x27;).ImageState

if ($imageState -eq &#x27;IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE&#x27;) {
        break
} else {
        $setupActLog = Get-Content &quot;$Env:Windir\System32\Sysprep\Panther\setupact.log&quot; -Raw
        $setupErrLog = Get-Content &quot;$Env:Windir\System32\Sysprep\Panther\setuperr.log&quot; -Raw
        Write-Output &quot;setupact.log:&quot;
        Write-Output $setupActLog
        Write-Output &quot;setuperr.log:&quot;
        Write-Output $setupErrLog
        Write-Output &quot;The unexpected state: $imageState&quot;
        exit 1
}</pre>
  <p id="bhAm">The image state must be <code>IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE</code> before an image can be captured; if it is anything else, check the Sysprep logs.</p>
  <h3 id="fY23">The Root Cause: Windows Store and Mini-Setup Delays</h3>
  <p id="Vrfo">Sometimes, the Sysprep errors are caused by Windows Store automatic updates running in the background. The mini-setup phase of Sysprep, which generalizes the image, may experience significant delays or failures when Windows Store services update during this time. Microsoft Premier Support has acknowledged this as a potential issue, but they did not provide a definitive fix.</p>
  <p id="hTMp">The solution involves disabling the Windows Store automatic updates and ensuring the related services are stopped.</p>
  <p id="DMgs">Source: <a href="https://learn.microsoft.com/en-us/answers/questions/333299/windows-10-sysprep" target="_blank">https://learn.microsoft.com/en-us/answers/questions/333299/windows-10-sysprep</a></p>
  <h3 id="UG97">The Solution:<br /></h3>
  <p id="6NKo">First, you need to add a registry key to disable Windows Store automatic updates. Here’s a PowerShell script that handles this task:</p>
  <pre id="ftLR"># Disable Windows Store Automatic Updates
Write-Host &quot;Adding Registry key to Disable Windows Store Automatic Updates&quot;
$registryPath = &quot;HKLM:\SOFTWARE\Policies\Microsoft\WindowsStore&quot;

If (!(Test-Path $registryPath)) {
    New-Item -Path $registryPath -Force | Out-Null
    New-ItemProperty -Path $registryPath -Name AutoDownload -Value 2 -PropertyType DWord | Out-Null
}
Else {
    Set-ItemProperty -Path $registryPath -Name AutoDownload -Value 2
}</pre>
  <p id="AvCj">Next, you need to stop the Windows Store installer service, which could be interfering with the Sysprep process.</p>
  <pre id="ftLR"># Stop WindowsStore Installer Service and set to Disabled
Write-Host &quot;Stopping InstallService&quot;
Stop-Service InstallService
Set-Service InstallService -StartupType Disabled</pre>
  <p id="6Rtr">This ensures that the InstallService responsible for handling Windows Store updates is stopped and will not start automatically during the generalization process.</p>
  <h3 id="GBSp">Final Thoughts</h3>
  <p id="gx2T">Once these steps are complete, Sysprep should be able to run without encountering BCD errors or other issues related to the Windows Store updates. By disabling Windows Store automatic updates and stopping the associated services, you reduce the likelihood of conflicts during Sysprep’s generalization phase.</p>
  <p id="HR0c">This fix has helped resolve similar issues for other users, and it’s a valuable solution to try if you’re experiencing Sysprep failures in Windows Server 2019 or 2022. Make sure to share this with others facing the same challenge!</p>
  <tt-tags id="LOTF">
    <tt-tag name="windows">#windows</tt-tag>
    <tt-tag name="sysprep">#sysprep</tt-tag>
  </tt-tags>

]]></content:encoded></item><item><guid isPermaLink="true">https://amirkhonov.com/colima-mount-error-fix-on-mac-m1-m2-m3</guid><link>https://amirkhonov.com/colima-mount-error-fix-on-mac-m1-m2-m3?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov</link><comments>https://amirkhonov.com/colima-mount-error-fix-on-mac-m1-m2-m3?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=amirkhonov#comments</comments><dc:creator>amirkhonov</dc:creator><title>Resolving Docker Mount Permission Errors with Colima</title><pubDate>Sat, 01 Jun 2024 16:07:11 GMT</pubDate><description><![CDATA[If you’ve encountered a frustrating error while working with Colima on MacOS, you’re not alone. The error message:]]></description><content:encoded><![CDATA[
  <p id="EpSU">If you’ve encountered a frustrating error while working with Colima on MacOS, you’re not alone. The error message:</p>
  <pre id="tArv">Gracefully stopping... (press Ctrl+C again to force)
Error response from daemon: error while creating mount source path &#x27;/Users/&lt;username&gt;/Projects/GPT/application/core-data&#x27;: chown /Users/&lt;username&gt;/Projects/GPT/application/core-data: operation not permitted</pre>
  <p id="Ms5u">is a common issue faced by developers. It occurs due to permission problems when Colima attempts to access certain directories. Here’s a step-by-step guide on how I resolved this problem using Colima.</p>
  <h3 id="iBj4">Understanding the Issue</h3>
  <p id="CpBK">The core of the issue is Colima’s inability to change the ownership of the specified directory, which usually stems from restrictive macOS permissions. Resolving it takes a bit of configuration so that directory permissions are handled correctly.</p>
  <h3 id="YgDy">Solution Overview</h3>
  <p id="HxkX">To resolve the issue, you need to configure Colima to use the <code>9p</code> mount type and explicitly define the directory mounts both with absolute paths and with the <code>~</code> notation. This ensures that Colima can correctly map and cache the directories, granting the necessary permissions.</p>
  <h3 id="C9RC">Step-by-Step Solution</h3>
  <h4 id="vZAV">Step 1: Update Colima Configuration</h4>
  <p id="kA9n">First, you need to update the Colima configuration file. Open or create the override configuration file at <code>/Users/&lt;username&gt;/.colima/_lima/_config/override.yaml</code>. Replace <code>&lt;username&gt;</code> with your actual macOS username.</p>
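If the override file (or its parent directory) does not exist yet, you can create it from the terminal first. A minimal sketch, assuming Colima’s default home at <code>~/.colima</code>:

```shell
# Create the Lima config directory and an empty override file if missing.
# The path assumes Colima's default home; adjust it if you set COLIMA_HOME.
mkdir -p "$HOME/.colima/_lima/_config"
touch "$HOME/.colima/_lima/_config/override.yaml"
```

Then open the file in your editor of choice and add the configuration below.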
  <p id="3mix">Add the following configuration to the file:</p>
  <pre id="d1fm">mountType: 9p
mounts:
  - location: &quot;/Users/&lt;username&gt;&quot;
    writable: true
    9p:
      securityModel: mapped-xattr
      cache: mmap
  - location: &quot;~&quot;
    writable: true
    9p:
      securityModel: mapped-xattr
      cache: mmap
  - location: /tmp/colima
    writable: true
    9p:
      securityModel: mapped-xattr
      cache: mmap
</pre>
  <p id="1mF6">This configuration ensures that both the absolute path and the home directory symbol (<code>~</code>) are correctly mapped and accessible.</p>
  <h4 id="wELd">Step 2: Restart Colima</h4>
  <p id="Ztse">After updating the configuration, restart Colima to apply the changes. Use the following commands in your terminal:</p>
  <pre id="39lv">colima delete
colima start --mount-type 9p
</pre>
  <p id="Zv6K">The <code>colima delete</code> command stops and removes the existing Colima instance (note that this also deletes any data stored inside the VM, such as images and volumes), while <code>colima start --mount-type 9p</code> starts a fresh instance with the new mount type and configuration.</p>
  <h3 id="HviP">Explanation of Configuration</h3>
  <ul id="9ai4">
    <li id="X3ms"><strong>mountType: 9p</strong>: Specifies the use of the 9p protocol for mounts. This protocol allows for better handling of file permissions and caching.</li>
    <li id="mEgy"><strong>location</strong>: Defines the directories to be mounted. Both the absolute path (<code>/Users/&lt;username&gt;</code>) and the home directory symbol (<code>~</code>) are included to cover all scenarios.</li>
    <li id="YW8k"><strong>writable: true</strong>: Ensures that the directories are writable.</li>
    <li id="8Gqc"><strong>securityModel: mapped-xattr</strong>: Uses extended attributes for security, mapping file permissions appropriately.</li>
    <li id="ONeg"><strong>cache: mmap</strong>: Specifies the caching mechanism to improve performance.</li>
  </ul>
  <h3 id="ETf8">Conclusion</h3>
  <p id="dY1K">By configuring Colima to handle directory mounts correctly, you can resolve permission errors and ensure a smoother development experience on macOS. This guide provides a straightforward solution to a common problem, enabling you to focus on your projects without interruption.</p>
  <p id="sbOc">Feel free to share this guide with your peers and help others overcome the same challenge. Happy coding!</p>

]]></content:encoded></item></channel></rss>