{"id":59721,"date":"2025-07-12T13:57:13","date_gmt":"2025-07-12T13:57:13","guid":{"rendered":"https:\/\/www.europesays.com\/us\/59721\/"},"modified":"2025-07-12T13:57:13","modified_gmt":"2025-07-12T13:57:13","slug":"real-time-analytics-news-for-the-week-ending-july-12","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/59721\/","title":{"rendered":"Real-time Analytics News for the Week Ending July 12"},"content":{"rendered":"<p><img width=\"300\" height=\"200\" src=\"https:\/\/www.europesays.com\/us\/wp-content\/uploads\/2025\/06\/1751174890_219_Depositphotos_718319726_S-300x200.jpg\" class=\"alignleft wp-post-image\" alt=\"\" decoding=\"async\" fetchpriority=\"high\"\/><\/p>\n<p><strong>In this week\u2019s real-time analytics news: Amazon Web Services (AWS) announced new capabilities in SageMaker AI.<\/strong><\/p>\n<p>Keeping pace with news and developments in the real-time analytics and AI market can be a daunting task. Fortunately, we have you covered with a summary of the items our staff comes across each week. 
And if you prefer it in your inbox,\u00a0<a href=\"https:\/\/www.rtinsights.com\/real-time-pulse\/?utm_campaign=Real%20Time%20Pulse%20Newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-_7JlM8sKpUj0QiJImajXMJb9FYp8rJklS522PLp56h9JqQvhQUBOne8g08bTJA9PaGcR95\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>sign up here<\/strong><\/a>!<\/p>\n<p><a href=\"https:\/\/aws.amazon.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>AWS<\/strong><\/a>\u00a0announced\u00a0new capabilities in<a href=\"https:\/\/aws.amazon.com\/sagemaker-ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">\u00a0SageMaker AI<\/a>\u00a0to accelerate how customers build and train AI models. The new capabilities include:<\/p>\n<ul class=\"wp-block-list\">\n<li>SageMaker HyperPod observability provides real-time visibility into model development tasks and compute resources, helping customers bring models to market faster by reducing the time to troubleshoot performance issues from days to minutes.<\/li>\n<li>Customers can now easily deploy models from SageMaker JumpStart, as well as fine-tuned custom models, on SageMaker HyperPod for fast, scalable inference.<\/li>\n<li>With new remote connections to SageMaker AI, developers and data scientists can quickly and easily connect to SageMaker AI from their local IDE, maintaining access to the custom tools and familiar workflows that help them work most efficiently.<\/li>\n<\/ul>\n<p>Real-time analytics news in brief<\/p>\n<p><a href=\"https:\/\/www.cerebras.ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Cerebras Systems<\/strong><\/a>\u00a0announced new partnerships and integrations with Hugging Face, DataRobot, and Docker. These collaborations dramatically increase the accessibility and impact of Cerebras\u2019 AI inference, enabling a new generation of performant, interactive, and intelligent agentic AI applications. 
For example, now powered by Cerebras inference and deployed with Gradio on Hugging Face Spaces, Hugging Face\u2019s SmolAgents can deliver near-instant responses with dramatically improved interactivity. DataRobot\u2019s Syftr, integrated with Cerebras\u2019 AI inference performance, delivers a toolchain for production-grade agentic apps. And with Docker Compose and Cerebras, developers can spin up powerful, multi-agent AI stacks in seconds.<\/p>\n<p><a href=\"https:\/\/bitwarden.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Bitwarden<\/strong><\/a> announced the launch of a new Model Context Protocol (MCP) server, enabling secure integration between AI agents and credential workflows. 
The Bitwarden MCP server operates on a user\u2019s local machine and allows AI assistants to access, generate, retrieve, and manage credentials while preserving zero-knowledge, end-to-end encryption through a local-first architecture.\u00a0<\/p>\n<p><a href=\"https:\/\/www.cognizant.com\/us\/en\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Cognizant<\/strong><\/a>\u00a0announced the launch of\u00a0Cognizant Agent Foundry, an offering designed to help enterprises design, deploy, and orchestrate autonomous AI agents at scale. Cognizant Agent Foundry supports adaptive operations, real-time decision-making, and personalized customer experiences, empowering organizations to embed agentic capabilities across workflows.<\/p>\n<p><a href=\"https:\/\/www.denodo.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Denodo<\/strong><\/a> announced the availability of the Denodo DeepQuery capability, now in private preview and soon to be generally available, enabling generative AI (GenAI) to go beyond retrieving facts to investigating, synthesizing, and explaining its reasoning. Denodo also announced the availability of Model Context Protocol (MCP) support as part of the Denodo AI SDK.<\/p>\n<p><a href=\"https:\/\/domino.ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Domino Data Lab<\/strong><\/a>\u00a0announced\u00a0the launch of its Vibe Modeling offering. Using the solution, data scientists describe their analytical intent and desired outcomes, with AI accelerating the time-consuming experimental phases of model building. 
The solution is available via Domino\u2019s\u00a0<a href=\"https:\/\/github.com\/dominodatalab\/domino_mcp_server\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">GitHub repository<\/a>.\u00a0<\/p>\n<p><a href=\"https:\/\/graphwise.ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Graphwise<\/strong><\/a>\u00a0announced the availability of GraphDB 11. This latest release makes it easier to integrate with multiple Large Language Models (LLMs) and enables AI applications to deliver more accurate and contextually relevant results. With MCP support, GraphDB 11 offers swift integration of data into agentic AI ecosystems and enables AI platforms like Microsoft Copilot Studio to tap directly into an organization\u2019s enterprise knowledge.<\/p>\n<p><a href=\"https:\/\/hydrolix.io\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Hydrolix<\/strong><\/a>\u00a0announced support for AWS Elemental MediaLive, MediaPackage, and MediaTailor, as well as client-side analytics from Datazoom. The new integrations provide companies with real-time and historical insights into video streaming performance and advertising delivery. The AWS Elemental and Datazoom integrations complement existing integrations with AWS CloudFront and AWS WAF.<\/p>\n<p><a href=\"https:\/\/konghq.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Kong<\/strong><\/a> released\u00a0AI Gateway 3.11,\u00a0expanding its AI infrastructure tool with new capabilities to help organizations grow, secure, and scale GenAI and agent-based systems. The new release includes more than 10 out-of-the-box GenAI capabilities designed to help teams build scalable, multimodal AI agents while cutting token costs and strengthening guardrails.<\/p>\n<p><a href=\"https:\/\/www.liquid.ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Liquid AI<\/strong><\/a> announced the launch of its next-generation Liquid Foundation Models (LFM2). 
Unlike traditional transformer-based models, LFM2 is composed of structured, adaptive operators that allow for more efficient training, faster inference, and better generalization, especially in long-context or resource-constrained scenarios. Additionally, Liquid AI open-sourced LFM2; its weights can now be downloaded from\u00a0Hugging Face\u00a0and tested through the\u00a0Liquid Playground.<\/p>\n<p><a href=\"https:\/\/www.opentext.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>OpenText<\/strong><\/a> introduced\u00a0MyAviator, a secure, personal digital worker built for the enterprise. MyAviator enables individuals to securely interact with their own documents, extract insights, and generate content, all within the trusted OpenText ecosystem. The solution leverages\u00a0agentic AI and the company\u2019s suite of Aviator solutions to drive diverse automation use cases.<\/p>\n<p><a href=\"https:\/\/sambanova.ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>SambaNova<\/strong><\/a> announced\u00a0SambaManaged, an inference-optimized data center product offering deployable in just 90 days, far faster than the typical 18 to 24 months. This modular product enables existing data centers to immediately stand up AI inference services with minimal infrastructure modification.<\/p>\n<p><a href=\"https:\/\/www.weka.io\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>WEKA<\/strong><\/a>\u00a0unveiled NeuralMesh Axon, an advanced storage system that leverages a fusion architecture designed to address the fundamental challenges of running exascale AI applications and workloads. 
NeuralMesh Axon seamlessly fuses with GPU servers and AI factories to streamline deployments, reduce costs, and significantly enhance AI workload responsiveness and performance.<\/p>\n<p>Partnerships, collaborations, and more<\/p>\n<p><a href=\"https:\/\/www.oracle.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Oracle<\/strong><\/a>\u00a0and\u00a0<a href=\"https:\/\/aws.amazon.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Amazon Web Services<\/strong><\/a> announced the general availability of Oracle Database@AWS. Customers can now run Oracle Exadata Database Service and Oracle Autonomous Database on dedicated infrastructure on Oracle Cloud Infrastructure (OCI) within AWS. Oracle Database@AWS is available in the AWS U.S. East (N. Virginia) and U.S. West (Oregon) Regions, with plans to expand availability to 20 additional AWS Regions around the world.<\/p>\n<p>Additionally, customers can easily migrate their Oracle Database workloads to Oracle Database@AWS running on OCI in AWS while taking advantage of Oracle Real Application Clusters (RAC) and the latest Oracle Database 23ai with embedded AI Vector capabilities. 
Oracle Database@AWS includes zero-ETL (extract, transform, and load) integration, which simplifies data integration between enterprise Oracle Database services and AWS Analytics services, eliminating the need to build and manage complex data pipelines.<\/p>\n<p><a href=\"https:\/\/www.anaconda.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Anaconda<\/strong><\/a> announced a partnership with <a href=\"https:\/\/prefix.dev\/\" rel=\"nofollow noopener\" target=\"_blank\"><strong>Prefix.dev<\/strong><\/a> to deliver significant performance improvements for conda package creation while maintaining the trusted conda-build experience that enterprises rely on. The enhanced conda-build will leverage Rust-based technology from rattler-build to enable faster package building while ensuring compatibility with existing conda environments and workflows.\u00a0<\/p>\n<p><a href=\"http:\/\/ddn.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>DDN<\/strong><\/a>\u00a0announced that <a href=\"https:\/\/cloud.google.com\/\" rel=\"nofollow noopener\" target=\"_blank\"><strong>Google Cloud<\/strong><\/a> Managed Lustre, a fully managed, high-performance parallel file system service, powered by DDN\u2019s EXAScaler technology, is now generally available. Designed to accelerate the most demanding workloads in AI, HPC, and data-intensive enterprise environments, Google Cloud Managed Lustre brings the power of Lustre natively into the Google Cloud ecosystem.<\/p>\n<p><a href=\"https:\/\/duplocloud.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>DuploCloud<\/strong><\/a> announced a strategic collaboration agreement (SCA) with <a href=\"https:\/\/aws.amazon.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Amazon Web Services<\/strong><\/a> (AWS) to bring a new set of automated DevOps solutions to AWS customers. 
With this agreement, DuploCloud and AWS will co-develop go-to-market initiatives and technical integrations aimed at enabling startups, generative AI innovators, and regulated industries to launch their products and services faster, while meeting strict compliance frameworks.<\/p>\n<p><a href=\"https:\/\/www.pingcap.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>PingCAP<\/strong><\/a> announced an expanded collaboration with <a href=\"https:\/\/azure.microsoft.com\/en-us\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><strong>Microsoft<\/strong><\/a> to accelerate the adoption of modern data infrastructure across the Microsoft Azure ecosystem. This collaboration brings together PingCAP\u2019s expertise in distributed transactional and analytical systems with Microsoft\u2019s global cloud platform to help organizations build scalable, real-time, and AI-ready applications on Azure.<\/p>\n<p>If your company has real-time analytics news, send your announcements to <a href=\"http:\/\/www.rtinsights.com\/cdn-cgi\/l\/email-protection#d9aaaab8b5b8b4b6b7bc99abadb0b7aab0beb1adaaf7bab6b4\" rel=\"nofollow noopener\" target=\"_blank\">[email\u00a0protected]<\/a>.<\/p>\n<p><strong>In case you missed it, here are our most recent weekly real-time analytics news roundups:<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"In this week\u2019s real-time analytics news: Amazon Web Services (AWS) announced new capabilities in SageMaker AI. 
Keeping pace&hellip;\n","protected":false},"author":3,"featured_media":23707,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[22],"tags":[745,4488,158,67,132,68],"class_list":{"0":"post-59721","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-computing","8":"tag-computing","9":"tag-mcp","10":"tag-technology","11":"tag-united-states","12":"tag-unitedstates","13":"tag-us"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@us\/114840616762323043","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/59721","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=59721"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/59721\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/23707"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=59721"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=59721"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=59721"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}