Serdar Yegulalp

What’s new in Kubernetes 1.7

In Kubernetes version 1.7, the container orchestration and management system is gaining a slew of new security, stateful-application, and extensibility features

Kubernetes 1.6 was mainly about stabilizing the system and bringing long-planned changes to term, such as the move to version 3 of the etcd distributed key-value store. Many of Kubernetes 1.7’s new features, by contrast, are still in the alpha stage, further signs that Kubernetes is reaching for a broader range of use cases. Other new capabilities fold in features previously relegated to other parts of the container ecosystem.


Nvidia’s new TensorRT speeds machine learning predictions

Nvidia has released a new version of TensorRT, a runtime system for serving inferences using deep learning models through Nvidia’s own GPUs.

Inferences, or predictions made from a trained model, can be served from either CPUs or GPUs. Serving inferences from GPUs is part of Nvidia’s strategy to get greater adoption of its processors, countering what AMD is doing to break Nvidia’s stranglehold on the machine learning GPU market.

Nvidia claims the GPU-based TensorRT beats CPU-only approaches across the board for inferencing. In one of Nvidia’s proffered benchmarks, the AlexNet image classification test under the Caffe framework, TensorRT ran 42 times faster than a CPU-only version of the same test (16,041 images per second vs. 374) on Nvidia’s Tesla P40 processor. (Always take industry benchmarks with a grain of salt.)
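The claimed speedup follows directly from the two throughput figures; a quick sanity check of the arithmetic:

```python
# Nvidia's quoted AlexNet/Caffe throughput on the Tesla P40 vs. a CPU-only run.
gpu_rate = 16_041  # images per second with TensorRT on the Tesla P40 (Nvidia's figure)
cpu_rate = 374     # images per second, CPU-only baseline (Nvidia's figure)

speedup = gpu_rate / cpu_rate
print(f"claimed speedup: {speedup:.1f}x")  # roughly 42.9x, matching the "42 times" claim
```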



Google open-sources TensorFlow training tools

Over the past year, Google’s TensorFlow has asserted itself as a popular open source toolkit for deep learning. But training a TensorFlow model can be cumbersome and slow—especially when the mission is to take a dataset used by someone else and try to refine the training process it uses. The sheer number of moving parts and variations in any model-training process is enough to make even deep-learning experts take a deep breath.

This week, Google open-sourced a project intended to cut down on the amount of work in configuring a deep learning model for training. Tensor2Tensor, or T2T for short, is a Python-powered workflow organization library for TensorFlow training jobs. It lets developers specify the key elements used in a TensorFlow model and define the relationships among them.
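As a rough, hypothetical sketch of that idea (not T2T’s actual API), a registry that ties named datasets, models, and hyperparameter sets into a training job might look like this:

```python
# Hypothetical illustration of a T2T-style component registry (not the real
# tensor2tensor API): pieces of a training job are registered by name, then a
# job is assembled by naming a problem (dataset), a model, and an hparams set.
REGISTRY = {"problem": {}, "model": {}, "hparams": {}}

def register(kind, name):
    """Decorator that files a component under REGISTRY[kind][name]."""
    def wrap(obj):
        REGISTRY[kind][name] = obj
        return obj
    return wrap

@register("hparams", "transformer_base")
def transformer_base():
    # Illustrative values only.
    return {"hidden_size": 512, "num_layers": 6, "learning_rate": 0.1}

def make_job(problem, model, hparams_set):
    """Resolve each named component, mirroring CLI-style flags."""
    return {
        "problem": problem,
        "model": model,
        "hparams": REGISTRY["hparams"][hparams_set](),
    }

job = make_job("translate_ende", "transformer", "transformer_base")
print(job["hparams"]["hidden_size"])
```

The registry pattern is what lets a workflow tool swap datasets or models by changing a single name rather than rewiring the whole training script.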


How Heptio plans to automate away Kubernetes setup pains

Heptio, the commercial Kubernetes outfit founded by two of the framework’s creators, has unveiled its first public project aimed at making Kubernetes easier to deploy in the enterprise.

Kubernetes simplifies how apps run as microservices, but setting up Kubernetes itself is no picnic. Heptio’s project automates some of the fiddlier parts of the setup process via a custom, domain-specific language.

Heptio’s project, dubbed Ksonnet, is an open source tool set for assembling the configuration needed to deploy Kubernetes. The most common setup difficulties in Kubernetes involve creating its configuration files, a chore Heptio calls the “wall of YAML” problem.
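As a rough illustration of the idea behind such tools (generating manifests programmatically rather than hand-writing the “wall of YAML”), here is a hedged Python sketch, not Ksonnet itself, that emits a minimal Deployment manifest as JSON, a format kubectl also accepts:

```python
import json

def deployment(name, image, replicas=1):
    """Build a minimal Kubernetes Deployment manifest as a plain dict.
    Field names follow the Deployment schema; the API version shown was
    one in common use circa Kubernetes 1.6/1.7."""
    return {
        "apiVersion": "extensions/v1beta1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# kubectl accepts JSON as well as YAML, so this could be piped straight in.
print(json.dumps(deployment("web", "nginx:1.13", replicas=3), indent=2))
```

Generating manifests from code keeps names, labels, and images defined once instead of repeated across files, which is exactly the duplication the “wall of YAML” complaint is about.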


Atom editor now has native GitHub integration

GitHub’s Atom, the Node.js- and HTML5-powered code editor, has traditionally integrated with Git repositories—including GitHub itself—only by way of third-party components.

All that changed this week with the unveiling of a new core package for Atom, called, appropriately enough, GitHub for Atom, along with new release and beta editions of Atom itself.

GitHub users, dock here

The new edition of Atom, version 1.17, introduces a UI component called “docks,” a way to provide side- or bottom-dockable tool panels in the editor. IDEs like Visual Studio and Eclipse have had dock-like components for some time; now Atom adds one as a core element.
