Sonata Software dividend, bonus date announced—check out record date, payment date

Sonata Software dividend record date, Sonata Software dividend payment date, Sonata Software bonus news: Sonata Software announced on Wednesday, October 25, a 700 per cent dividend payout and a bonus issue for its shareholders, along with its September quarter earnings, post-market hours. Sonata Software declared an interim dividend of Rs 7 per share of a face value of Rs 1, i.e., a 700 per cent payout for the financial year 2023-24.
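The 700 per cent figure follows directly from the dividend amount and the face value. A quick sketch of the arithmetic (variable names are illustrative, not from the company's filing):

```python
# Interim dividend expressed as a percentage of the share's face value.
def dividend_payout_percent(dividend_per_share: float, face_value: float) -> float:
    """A Rs 7 dividend on a Rs 1 face-value share is a 700% payout."""
    return dividend_per_share / face_value * 100

print(dividend_payout_percent(7, 1))  # 700.0
```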

Sonata Software dividend record date

The record date for the purpose of payment of the interim dividend has been fixed as November 7, 2023.

Sonata Software dividend payment date

As per the company’s communiqué, the interim dividend will be paid to the registered shareholders on or after November 22, 2023, through electronic mode or by dividend warrants, as applicable.

Sonata Software bonus news

In addition, the board has also approved and recommended a bonus issue of one equity share for every one equity share held by the shareholders of the company as on the record date, i.e., in the ratio of 1:1.

“The bonus issue of equity shares will be subject to approval by the shareholders through postal ballot and any other applicable statutory law and regulatory approvals,” the company’s regulatory filing read.

“The bonus shares, once allocated, shall rank pari-passu in all respects and carry the same rights as the existing equity shares and shall be entitled to participate in full in any dividend and other corporate action, recommended and declared after the new equity shares are allocated,” the filing further said.


NVIDIA Brings Generative AI to Millions, With Tensor Core GPUs, LLMs, Tools for RTX PCs and Workstations

Leading AI Platform Gets RTX-Accelerated Boost From New GeForce RTX SUPER GPUs, AI Laptops From Every Top Manufacturer

CES — NVIDIA today announced GeForce RTX SUPER desktop GPUs for supercharged generative AI performance, new AI laptops from every top manufacturer, and new NVIDIA RTX-accelerated AI software and tools for both developers and consumers.

Building on decades of PC leadership, with over 100 million of its RTX GPUs driving the AI PC era, NVIDIA is now offering these tools to enhance PC experiences with generative AI: NVIDIA TensorRT acceleration of the popular Stable Diffusion XL model for text-to-image workflows, NVIDIA RTX Remix with generative AI texture tools, NVIDIA ACE microservices and more games that use DLSS 3 technology with Frame Generation.

AI Workbench, a unified, easy-to-use toolkit for AI developers, will be available in beta later this month. In addition, NVIDIA TensorRT-LLM (TRT-LLM), an open-source library that accelerates and optimizes inference performance of the latest large language models (LLMs), now supports more pre-optimized models for PCs. Accelerated by TRT-LLM, Chat with RTX, an NVIDIA tech demo also releasing this month, allows AI enthusiasts to interact with their notes, documents and other content.

“Generative AI is the single most significant platform transition in computing history and will transform every industry, including gaming,” said Jensen Huang, founder and CEO of NVIDIA. “With over 100 million RTX AI PCs and workstations, NVIDIA is a massive installed base for developers and gamers to enjoy the magic of generative AI.”

Running generative AI locally on a PC is critical for privacy, latency and cost-sensitive applications. It requires a large installed base of AI-ready systems, as well as the right developer tools to tune and optimize AI models for the PC platform.

To meet these needs, NVIDIA is delivering innovations across its full technology stack, driving new experiences and building on the 500+ AI-enabled PC applications and games already accelerated by NVIDIA RTX technology.

RTX AI PCs and Workstations
NVIDIA RTX GPUs — capable of running a broad range of applications at the highest performance — unlock the full potential of generative AI on PCs. Tensor Cores in these GPUs dramatically speed AI performance across the most demanding applications for work and play.

The new GeForce RTX 40 SUPER Series graphics cards, also announced today at CES, include the GeForce RTX 4080 SUPER, 4070 Ti SUPER and 4070 SUPER for top AI performance. The GeForce RTX 4080 SUPER generates AI videos 1.5x faster — and images 1.7x faster — than the GeForce RTX 3080 Ti GPU. The Tensor Cores in SUPER GPUs deliver up to 836 trillion operations per second, bringing transformative AI capabilities to gaming, creating and everyday productivity.

Leading manufacturers — including Acer, ASUS, Dell, HP, Lenovo, MSI, Razer and Samsung — are releasing a new wave of RTX AI laptops, bringing a full set of generative AI capabilities to users right out of the box. The new systems, which deliver a performance increase ranging from 20x-60x compared with using neural processing units, will start shipping this month.

Mobile workstations with RTX GPUs can run NVIDIA AI Enterprise software, including TensorRT and NVIDIA RAPIDS for simplified, secure generative AI and data science development. A three-year license for NVIDIA AI Enterprise is included with every NVIDIA A800 40GB Active GPU, providing an ideal workstation development platform for AI and data science.

New PC Developer Tools for Building AI Models
To help developers quickly create, test and customize pretrained generative AI models and LLMs using PC-class performance and memory footprint, NVIDIA recently announced NVIDIA AI Workbench.

AI Workbench, which will be available in beta later this month, offers streamlined access to popular repositories like Hugging Face, GitHub and NVIDIA NGC, along with a simplified user interface that enables developers to easily reproduce, collaborate on and migrate projects.

Projects can be scaled out to virtually anywhere — whether the data center, a public cloud or NVIDIA DGX Cloud — and then brought back to local RTX systems on a PC or workstation for inference and light customization.

In collaboration with HP, NVIDIA is also simplifying AI model development by integrating NVIDIA AI Foundation Models and Endpoints, which include RTX-accelerated AI models and software development kits, into the HP AI Studio, a centralized platform for data science. This will allow users to easily search, import and deploy optimized models across PCs and the cloud.

After building AI models for PC use cases, developers can optimize them using NVIDIA TensorRT to take full advantage of RTX GPUs’ Tensor Cores.

NVIDIA recently extended TensorRT to text-based applications with TensorRT-LLM for Windows, an open-source library for accelerating LLMs. The latest update to TensorRT-LLM, available now, adds Phi-2 to the growing list of pre-optimized models for PC, which run up to 5x faster compared to other inference backends.

RTX-Accelerated Generative AI Powers New PC Experiences
At CES, NVIDIA and its developer partners are releasing new generative AI-powered applications and services for PCs, including:

  • NVIDIA RTX Remix, a platform for creating stunning RTX remasters of classic games. Releasing in beta later this month, it delivers generative AI tools that can transform basic textures from classic games into modern, 4K-resolution, physically based rendering materials.
  • NVIDIA ACE microservices, including generative AI-powered speech and animation models, which enable developers to add intelligent, dynamic digital avatars to games.
  • TensorRT acceleration for Stable Diffusion XL (SDXL) Turbo and latent consistency models, two of the most popular Stable Diffusion acceleration methods. TensorRT improves performance for both by up to 60% compared with the previous fastest implementation. An updated version of the Stable Diffusion WebUI TensorRT extension is also now available, including acceleration for SDXL, SDXL Turbo, LCM and Low-Rank Adaptation (LoRA), with improved LoRA support.
  • NVIDIA DLSS 3 with Frame Generation, which uses AI to increase frame rates up to 4x compared with native rendering, will be featured in a dozen of the 14 new RTX games announced, including Horizon Forbidden West, Pax Dei and Dragon’s Dogma 2.
  • Chat with RTX, an NVIDIA tech demo available later this month, allows AI enthusiasts to easily connect PC LLMs to their own data using a popular technique known as retrieval-augmented generation (RAG). The demo, accelerated by TensorRT-LLM, enables users to quickly interact with their notes, documents and other content. It will also be available as an open-source reference project, so developers can easily implement the same capabilities in their own applications.
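Retrieval-augmented generation, the technique Chat with RTX demonstrates, pairs a retrieval step over the user's own documents with a generation step that uses the retrieved text as context. The sketch below is a toy illustration of that control flow, not NVIDIA's implementation: the keyword-overlap retriever and the placeholder generator stand in for an embedding index and a real LLM.

```python
# Toy retrieval-augmented generation (RAG) pipeline:
# 1) retrieve the document most relevant to a query,
# 2) hand it to a generator as context.

def retrieve(query: str, documents: list[str]) -> str:
    """Pick the document sharing the most words with the query (toy scorer)."""
    q_words = set(query.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def generate(query: str, context: str) -> str:
    """Placeholder for an LLM call: a real system would prompt the model
    with the retrieved context prepended to the query."""
    return f"Answer to '{query}' using context: {context}"

docs = [
    "Meeting notes: ship the demo build on Friday",
    "Recipe: two eggs, flour, and a pinch of salt",
]
query = "When do we ship the demo?"
print(generate(query, retrieve(query, docs)))
```

A production system replaces `retrieve` with a vector search over embedded document chunks and `generate` with an actual model call, but the two-stage shape is the same.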

Learn more about the latest generative AI breakthroughs by joining NVIDIA at CES.

General Motors, Magna, Wipro team up for automotive software marketplace | Company News

General Motors, auto parts supplier Magna and IT company Wipro said on Tuesday they were working together to create a sales platform to buy and sell automotive software.

The joint venture, SDVerse, will link the buyers and sellers through a digital platform, where the software’s features and attributes can be listed.

Wipro, in a regulatory filing, disclosed an investment of $5.85 million, equating to a 27% stake in SDVerse. GM and Magna will hold a 46% and 27% stake, respectively.
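Wipro's disclosed figures also imply a rough valuation for the venture: $5.85 million for a 27% stake prices the whole of SDVerse at about $21.7 million. This is a back-of-the-envelope inference from the stated numbers, not a figure from the filing:

```python
# Implied valuation from a stake purchase:
# investment / stake fraction = total implied valuation.
def implied_valuation(investment_musd: float, stake_fraction: float) -> float:
    return investment_musd / stake_fraction

print(round(implied_valuation(5.85, 0.27), 2))  # 21.67 (million USD)

# The three disclosed stakes account for the full venture.
assert 46 + 27 + 27 == 100
```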

The transaction is expected to be completed before the end of March.

The platform is in its development stage and is expected to feature hundreds of automotive software products and participants across the industry.

The announcement comes at a time when automakers are ramping up their tech investments to help create connected vehicles with advanced driver aids.

“The market for automotive software is expected to nearly double this decade, potentially outpacing the growth of software development talent pools,” said Harmeet Chauhan, global head Wipro Engineering Edge, Wipro Ltd.

While the company did not detail specific revenue targets, SDVerse will follow an annual subscription fee model and not charge any fees for buying or selling products.

GM, Magna and Wipro will hold seats on the board of SDVerse but the platforms will operate independently.

First Published: Mar 05 2024 | 11:43 PM IST

Jefferies Suggests 2 Software Stocks to Buy

We all know how the technology sector led the way in last year’s market gains, with the mega-cap ‘Magnificent 7’ taking up the lion’s share of the headlines. But the giant technology stocks weren’t the only story in town, and software stocks, riding the AI wave, reaped their own share of the gains.

According to Gartner, the global IT spend is likely to hit $5 trillion this year. That represents a jump of 6.8% from last year, and a hefty opportunity for software companies able to hitch a ride on the way up. In that case, AI could be the key. AI, especially generative AI, is helping to drive the increase in technology and software spending, creating openings for software companies across a multitude of industries.

Watching these developments from the investment bank Jefferies, 5-star analyst Surinder Thind has been busy recently finding compelling investment choices in the software sector. Thind’s overall thesis is based on valuation; in suggesting these software stocks to buy, he points out that they are ‘too attractive to ignore.’

Keeping this perspective in focus, Thind identifies two standout software stocks with significant potential. Let’s delve into Thind’s insights on these stocks while also leveraging the TipRanks platform to gauge the broader sentiment across Wall Street.

Similarweb (SMWB)

Similarweb, the first stock on today’s list, lives and operates in the digital economy. The company offers its customers a platform and tools to develop accurate, comprehensive data analytics, essential for effective marketing in the online world. Similarweb’s services power effective digital research, shopper intelligence, sales intelligence, and digital marketing.

The services are designed to give users the combination of data and insight needed to score sales wins, and Similarweb makes systematic use of AI technology to tailor results and analytics to the users’ specific needs.

Every company, at every scale, needs solid online data, and Similarweb has seen high success in attracting big-name enterprise customers. The company boasts such names as Walmart, Adidas, Pepsico, and DHL among its client base.

Like many high-tech startups that have since gone public, Similarweb has seen its shares fall since entering the stock market. SMWB started trading on Wall Street in May of 2021; since then, the shares have fallen by 70%.

But – there might be positive news for investors. In its most recently reported quarter, 4Q23, Similarweb showcased a net non-GAAP earnings per share of $0.06, exceeding the forecast by 6 cents per share. Moreover, Similarweb achieved revenues of $56.8 million, reflecting a 10.7% year-over-year growth and approximately a million dollars higher than anticipated.

Turning to the Jefferies view, we find analyst Thind outlining the company’s solid position in the data niche: “Even though SMWB operates in a highly competitive market, it provides data and insights at a scale that is difficult to replicate, which makes for a reasonably wide moat. The overall product offering is of high quality and it has the potential to become core to many companies’ digital strategies. This can be seen in the company’s ability to continue to attract large new customers this year (ie, 18 new clients >$100K ARR through 3Q) despite the challenging environment. At this point, mgmt believes it has penetrated <1% of its $44B TAM.”

Thind goes on to show just how this stock is a good buy for investors, by the numbers, writing, “The company is currently trading at a 2025 EV/Rev of 1.6x, which is well below the peer group average of 3.2x. With fundamentals poised to improve, we expect the stock to begin re-rating higher.”

Quantifying his stance, Thind assumes coverage of SMWB shares with a Buy rating and a $10 price target that suggests a one-year upside potential of 50%. (To watch Thind’s track record, click here)
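A price target and the upside it implies together pin down the prevailing share price. The relationship (illustrative arithmetic, not TipRanks' methodology) is simply:

```python
# upside = target / current - 1, so current = target / (1 + upside).
def implied_upside(current: float, target: float) -> float:
    return target / current - 1

def implied_current_price(target: float, upside: float) -> float:
    return target / (1 + upside)

# A $10 target with 50% implied upside corresponds to a ~$6.67 share price.
print(round(implied_current_price(10, 0.50), 2))  # 6.67
```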

There was little action on the Street regarding SMWB over the past 3 months, with Thind being the sole analyst offering insights into the web analytics company’s future prospects. (See Similarweb stock forecast)

ZoomInfo Technologies (ZI)

Next up is ZoomInfo, a software company in the cloud computing niche. The company’s chief product is a marketing-oriented search engine, designed to connect client companies with their own customers directly when the customers are ready to make purchases. ZoomInfo’s platform gives its users data-backed marketing insights crafted to align sales and marketing teams with their target audiences. The analytics streamline users’ go-to-market processes for greater productivity and efficiency and consequent stronger growth.

ZoomInfo has found wide acceptance around the world and maintains multiple offices. The company is based in Vancouver, Washington, and also keeps offices in such major US hubs as Atlanta and Boston, as well as reaching out to second-tier locations such as Bethesda, Maryland, in the DC suburbs, and Grand Rapids, Michigan. Internationally, ZoomInfo has locations in London, Toronto, and Chennai, India.

In today’s business world, everyone needs data. It’s inescapable. ZoomInfo has leveraged that truth to build an enterprise client list with more than 35,000 names, including such major figures as Deloitte, FedEx, Duracell, and Bank of America. Users of ZoomInfo’s service have reported an overall 70% decrease in marketing spend and a 63% increase in productivity.

Yesterday, the company announced its financial results for Q4. Investors responded favorably, propelling the stock 14% higher in Tuesday’s trading session.

The top line in that report came to $316.4 million, $5.86 million better than the estimates and up 4.9% year-over-year. The company’s non-GAAP EPS figure, of 26 cents per share, was a penny ahead of the forecast.

Looking ahead, the company anticipates revenue in the range of $1.26 billion to $1.28 billion (compared to the consensus of $1.27 billion) and adjusted net income per share between $0.99 and $1.01 (compared to the consensus of $0.99) for full-year 2024.

Among the bulls is Surinder Thind, who initiated coverage of this stock at a Buy rating – and with a $24 price target implying a 31% upside to the shares for the next 12 months (To watch Thind’s track record, click here)

Thind went on to back his stance by outlining the company’s solid prospects over the next couple of years: “Strong new customer growth is being masked by reductions at existing customers, which is only now stabilizing after coming off pandemic spending highs. 2024 should mark trough LSD growth, advancing to a HSD exit rate and high-teens growth in 2025. Meanwhile, strong margins, solid FCF, and favorable secular trends remain underappreciated given top-line challenges… We believe ZI has the potential to grow revenues organically at a high teens pace in 2025, and get this above +20% y/y in 2026. This acceleration in revenues should lead to multiple expansion.”

What does the rest of the Street think? Looking at the consensus breakdown, opinions from other analysts are more spread out. 5 Buys, 4 Holds and 2 Sells add up to a Moderate Buy consensus. In addition, the $21.05 average price target indicates ~15% upside potential. (See ZoomInfo stock forecast)

To find good ideas for stocks trading at attractive valuations, visit TipRanks’ Best Stocks to Buy, a tool that unites all of TipRanks’ equity insights.

Disclaimer: The opinions expressed in this article are solely those of the featured analysts. The content is intended to be used for informational purposes only. It is very important to do your own analysis before making any investment.

ServiceNow, Hugging Face, and NVIDIA Release New Open-Access LLMs to Help Developers Tap Generative AI for Building Enterprise Applications

StarCoder2 — Created With the BigCode Community and Trained on 600+ Programming Languages — Advances Code Generation, Transparency, Governance, and Innovation

ServiceNow (NYSE: NOW), Hugging Face, and NVIDIA today announced the release of StarCoder2, a family of open-access large language models for code generation that sets new standards for performance, transparency, and cost-effectiveness.

StarCoder2 was developed in partnership with the BigCode Community, managed by ServiceNow, the leading digital workflow company making the world work better for everyone, and Hugging Face, the most-used open-source platform, where the machine learning community collaborates on models, datasets, and applications.

Trained on 619 programming languages, StarCoder2 can be further trained and embedded in enterprise applications to perform specialized tasks such as application source code generation, workflow generation, text summarization, and more. Developers can use its code completion, advanced code summarization, code snippets retrieval, and other capabilities to accelerate innovation and improve productivity.

StarCoder2 offers three model sizes: a 3-billion-parameter model trained by ServiceNow; a 7-billion-parameter model trained by Hugging Face; and a 15-billion-parameter model built by NVIDIA with NVIDIA NeMo and trained on NVIDIA accelerated infrastructure. The smaller variants provide powerful performance while saving on computing costs, as fewer parameters require less computing during inference. In fact, the new 3-billion-parameter model matches the performance of the original StarCoder 15-billion-parameter model.

“StarCoder2 stands as a testament to the combined power of open scientific collaboration and responsible AI practices with an ethical data supply chain,” emphasizes Harm de Vries, lead of ServiceNow’s StarCoder2 development team and co-lead of BigCode. “The state-of-the-art open-access model improves on prior generative AI performance to increase developer productivity and provides developers equal access to the benefits of code generation AI, which in turn enables organizations of any size to more easily meet their full business potential.”

“The joint efforts led by Hugging Face, ServiceNow, and NVIDIA enable the release of powerful base models that empower the community to build a wide range of applications more efficiently with full data and training transparency,” said Leandro von Werra, machine learning engineer at Hugging Face and co‑lead of BigCode. “StarCoder2 is a testament to the potential of open source and open science as we work toward democratizing responsible AI.”

“Since every software ecosystem has a proprietary programming language, code LLMs can drive breakthroughs in efficiency and innovation in every industry,” said Jonathan Cohen, vice president of applied research at NVIDIA. “NVIDIA’s collaboration with ServiceNow and Hugging Face introduces secure, responsibly developed models and supports broader access to accountable generative AI that we believe will benefit the global community.”

StarCoder2 Models Supercharge Custom Application Development

StarCoder2 models share a state-of-the-art architecture and carefully curated data sources from BigCode that prioritize transparency and open governance to enable responsible innovation at scale.

StarCoder2 advances the potential of future AI-driven coding applications, including text-to-code and text-to-workflow capabilities. With broader, deeper programming training, it provides repository context, enabling accurate, context-aware predictions. These advancements serve seasoned software engineers and citizen developers alike, accelerating business value and digital transformation.

The foundation of StarCoder2 is a new code dataset called The Stack v2, which is more than 7x larger than The Stack v1. In addition to the advanced dataset, new training techniques help the model understand low-resource programming languages (such as COBOL), mathematics, and program source code discussions.

Fine-Tuning Advances Capabilities With Business-Specific Data

Users can fine-tune the open-access StarCoder2 models with industry- or organization-specific data using open-source tools such as NVIDIA NeMo or Hugging Face TRL. They can create advanced chatbots to handle more complex summarization or classification tasks, develop personalized coding assistants that can quickly and easily complete programming tasks, retrieve relevant code snippets, and enable text-to-workflow capabilities.

Organizations have already begun to fine-tune the foundational StarCoder model to create specialized task-specific capabilities for their businesses.

ServiceNow’s text-to-code Now LLM was purpose-built on a specialized version of the 15-billion-parameter StarCoder LLM, fine-tuned and trained for its workflow patterns, use cases, and processes. Hugging Face has also used the model to create its StarChat assistant.

BigCode Fosters Open Scientific Collaboration in AI

BigCode represents an open scientific collaboration led by Hugging Face and ServiceNow, dedicated to the responsible development of LLMs for code.

The BigCode community actively participates in the technical aspects of the StarCoder2 project through working groups and task forces, leveraging ServiceNow’s Fast LLM framework to train the 3-billion-parameter model, Hugging Face’s nanotron framework for the 7-billion-parameter model and the NVIDIA NeMo cloud-native framework and NVIDIA TensorRT-LLM software to train and optimize the 15-billion-parameter model.

Fostering responsible innovation is at the core of BigCode’s purpose, demonstrated through its open governance, transparent supply chain, use of open-source software, and the ability for developers to opt data out of training. StarCoder2 was built using responsibly sourced data under license from the digital commons of Software Heritage, hosted by Inria.

“StarCoder2 is the first code generation AI model developed using the Software Heritage source code archive and built to align with our policies for responsible development of models for code,” stated Roberto Di Cosmo, director at Software Heritage. “The collaboration of ServiceNow, Hugging Face, and NVIDIA exemplifies a shared commitment to ethical AI development, advancing technology for the greater good.”

StarCoder2, like its predecessor, will be made available under the BigCode Open RAIL-M license, allowing royalty-free access and use. Further fostering transparency and collaboration, the model’s supporting code will continue to reside on the BigCode project’s GitHub page.

All StarCoder2 models will also be available for download from Hugging Face, and the StarCoder2 15-billion-parameter model is available on NVIDIA AI Foundation models for developers to experiment directly from their browser, or through an API endpoint.

For more information on StarCoder2, visit https://huggingface.co/bigcode.
