DevOpsANGLE

Google’s Project Tango finds a new territory: the NASA Space Station Mon, 21 Apr 2014 18:49:49 +0000

Google’s Project Tango finds a new territory: the NASA Space Station is a post from: DevOpsANGLE

Google’s technology will soon enter new territory: space.

NASA has announced plans to take advantage of Project Tango on board the International Space Station to guide the navigation of SPHERES robots within the facility. SPHERES are zero-gravity autonomous machines being developed to serve as robotic assistants that help astronauts and independently perform tasks on the ISS.

Until now, the small robots have been equipped with sensors that determine their position from the interaction with sounds emitted by speakers mounted on the station’s walls. It is a complex localization system based on triangulation, and it only works within a volume of a little more than six cubic meters.
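The idea behind that localization scheme can be sketched in a few lines. The following is a toy 2D illustration (not NASA’s actual algorithm): given known speaker positions and measured distances to each, subtracting the circle equations pairwise yields a small linear system whose solution is the robot’s position. All names and values here are invented for illustration.

```python
import math

def triangulate(beacons, dists):
    """Recover a 2D position from distances to three fixed beacons."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = dists
    # Subtracting the circle equations pairwise gives a 2x2 linear system.
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x2), 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# Three hypothetical speakers and the distances measured from (1, 2):
speakers = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [math.dist(s, (1.0, 2.0)) for s in speakers]
print(triangulate(speakers, dists))  # ≈ (1.0, 2.0)
```

The real SPHERES system works in three dimensions and with noisy acoustic measurements, which is part of why its working volume was so limited.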

For almost a year, ATAP has been working with a team at the NASA Ames Research Center to integrate a Tango prototype into robots that work inside the International Space Station.

The project, initiated by Google, allows researchers to create a 3D map of the surrounding environment via infrared, letting the robots move around and avoid obstacles in their path without having to constantly correct their trajectory. As the robots use CO2 jets to control movement, the less carbon dioxide emitted into the confined space, the easier it is for astronauts to control oxygen levels.
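One way to picture how a 3D map cuts down on corrective thruster firings: the robot only needs to act when a sensed point falls inside a safety zone around it. The sketch below is assumed logic for illustration, not flight code; the safety radius, function names, and sample points are all hypothetical.

```python
import math

# Hypothetical safety margin around the robot, in meters.
SAFETY_RADIUS_M = 0.5

def needs_correction(point_cloud, robot_pos):
    """Fire corrective CO2 jets only if an obstacle point is too close."""
    return any(math.dist(p, robot_pos) < SAFETY_RADIUS_M for p in point_cloud)

# Two sensed obstacle points (meters); the second is ~0.24 m away.
cloud = [(1.2, 0.0, 0.3), (0.1, 0.2, 0.1)]
print(needs_correction(cloud, (0.0, 0.0, 0.0)))  # True
```

With no point cloud at all, the robot would have to correct continuously; with one, it can coast until something actually enters the safety sphere, which is where the CO2 savings come from.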

More importantly, with this upgrade the SPHERES units will no longer be bound to a defined space but will be free to move anywhere, greatly extending their usefulness to the crew.

According to the Google ATAP team, this will enable, for the first time in the history of the space program, autonomous navigation of a free-floating robotic platform 230 miles above the surface of the Earth.

“The development that we’re doing is just getting started. And this is the first device that we’ve built,” said Joel Hesch, an ATAP software engineer. “If you can do sensor fusion and perception on a mobile phone, you can enable so many use cases that can be used on other devices like SPHERES, that benefit the lives of people, that can really impact in a way that wasn’t possible before.”

In the future, the same technology behind Google’s Project Tango could also be used with other automata, as project manager Chris Provencher pointed out. Robonaut, for example, a humanoid robot, could assist astronauts during exploration missions or dangerous interventions on board or outside the space station.

Project Tango

Google unveiled Project Tango in February, showcasing a prototype smartphone with sensors that can map spaces and volumes, giving it a prominent role not only in geolocation but also in positioning within indoor environments.

The project opens the door to developers willing to push beyond today’s limits. It enables assistance systems for the blind, new augmented reality solutions, new types of gaming, tools for interior design, environmental analysis and measurement, and applications such as locating a product in a supermarket. The NASA space program is just the beginning.

Developers movin’ on top: How open-source + cloud changed the landscape Thu, 17 Apr 2014 16:10:51 +0000

Developers movin’ on top: How open-source + cloud changed the landscape is a post from: DevOpsANGLE

One of the biggest ongoing conversations in tech right now concerns the shifting role of developers and what it will mean for IT departments. A recent survey by Puppet Labs shows that developers have become so influential in shaping products and user experience that business success demands an understanding of just how important a role they play.

Not so long ago, the boot was on the other foot. It used to be that enterprises, with their enormous purchasing power, were the biggest consumers of IT technology. Software and hardware were far more expensive than they are now, and almost everything needed to build even a simple website, whether operating systems, development tools or servers, was available only through a commercial license.

Back then, developers could only work within their employer’s means. But things have drastically changed since then, firstly with so much software being open-sourced and made readily available, and later, with the evolution of the cloud market dispensing with the need for hardware.

Cloud-based innovation


The expanding cloud has totally disrupted the power structure within IT. Individual developers were rarely in a position to afford a dedicated server, and while shared hosting was a viable option, it didn’t come close to the power of the cloud. Within the cloud, all hardware is virtualized and run by a hypervisor that can perform numerous tasks at once, administering servers and creating partitions of CPU, memory, storage and more.

With no competition for resources among users because everyone gets their own virtual server instance, it appears to developers as if they have their own dedicated server. This affords almost unlimited options and flexibility, vastly improving the agility of those businesses that are willing to reach for the cloud.

Open-source empowering developers


Just as important as the growth of cloud is the widespread development and adoption of open-source. The ready availability of numerous free software projects has had a massive impact on IT that few would have foreseen 10 or 15 years ago.

This is something Red Hat CEO Jim Whitehurst alluded to when talking with SiliconANGLE founder John Furrier on theCUBE at Red Hat Summit 2014. Whitehurst explained how we’re seeing two major phenomena happening at the same time: the birth and growth of the big Web 2.0 companies, which drives the explosion of open source code, and the growing demand now being driven by consumers.

“Before, most code came from large enterprises. Large web 2.0 companies use open source to drive innovation,” stated Whitehurst.

“If you look at whether it’s DevOps or continuous deployment, those are all things that have come out of the Web 2.0 movement, which is all built on open source.”


This shift towards open-source is empowering developers like never before. It’s not that developers just control applications and the code – these days they control the entire infrastructure, and that’s led to increasing integration of developers and IT. In the future, IT departments will revolve almost entirely around developers, further accelerating innovations in the cloud. Pretty soon, almost every major enterprise is going to shift its IT into the cloud, and it’ll be the developers who push them there.

When it comes to information technology, it’s developers who’ve become the real decision makers. And those employers who’re willing to accept this new reality will fare much, much better than those that don’t.

photo credit: PhOtOnQuAnTiQuE via photopin cc

DevOps behind emergence of continuous innovation, delivery becoming standard practice | #RHSummit Wed, 16 Apr 2014 23:45:30 +0000

DevOps behind emergence of continuous innovation, delivery becoming standard practice | #RHSummit is a post from: DevOpsANGLE

Ask anyone in DevOps about the unicorns in their field and they will tell you about the companies that have embraced the concept of allowing their developers to conceive, write and deploy their code quickly and continually. At Facebook, Google and Amazon, the mantra is ‘fail fast’. Joining John Furrier and Stu Miniman at this year’s Red Hat Summit, broadcast live on SiliconANGLE’s theCUBE, was Gene Kim, founder of Tripwire, Inc. and co-author of The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win.

Even though the field of DevOps could be considered cutting or even bleeding edge, it finds relevancy in almost every conversation being held at this year’s summit, in light of all of the innovation and growth we are currently witnessing. Furrier began by asking Kim to share his perspective on DevOps as it stands today and where he sees it evolving in the coming years.

“We go to all these conferences and you surround yourself with the best thinkers and practitioners in the space,” Kim began. “I love seeing DevOps is such a main part of the program here at Red Hat. Even mainstream developers care about the downstream code and that it works properly. That warms my heart.” Kim believes we are currently watching DevOps create a shift in how business at all levels will be conducted. “What we are observing is the emergence of continuous innovation and continuous delivery as a standard practice,” he noted. “It’s not just for Amazon and Google and Etsy and Netflix. This is for any developer that wants to have fun doing their job.”


Considering the old procurement models, Kim believes this was the time DevOps needed to come into its own. “What we all want is fast feedback,” he said. “No one actually achieves their goals when it takes six weeks or six months to determine whether our code even runs.” Achieving this continuous innovation and delivery requires a practice that would once have struck a nerve in Kim. “[It] involves something that I would have thought was immoral: developers doing their own deploys. I think that’s kind of the end state for both development and operations.”

There are a lot of buzzwords associated with the field of DevOps. Furrier asked Kim to address the definition of DevOps and speak about why that definition is so very broad right now.

Kim conceded that drilling down to a precise definition is actually a difficult undertaking. “It’s not what you do. It’s the outcome,” he stated. “A great DevOps shop has fast flow of features and production where they can very quickly go from code being written to code deployed and code running. This is where you get hundreds or thousands of deploys per day.” In the early days, that level of agility often came at the cost of security and reliability; if reliability was more important, then agility was an impossibility. “[Today], they can do that and have world-class stability, reliability and availability and security.”

The conversation then shifted to asking Kim if he could identify the ‘lightning moment’ for DevOps. “For me, it was 2007,” Kim stated. “I was with a friend who was CTO of AOL. We were talking about the Ops problem of when Ops can’t upgrade from 2.4 to 2.6 kernel in Linux. And he says to me, ‘That’s not a Dev problem. That’s not an Ops problem. That’s my boss’ boss’ biggest problem’.” According to Kim, that statement was, for him, the a-ha moment: the problem DevOps solved was not just an Ops or Dev problem, it affected the people and the businesses they serve.

Another story shared by Kim highlighted how IT and DevOps are not only driving their businesses to increased agility and reliability, but also how the lifecycle of the developer is moving at a faster pace. His anecdote concerned a director at Intel who said his time in fabrication and his time in IT were broadly similar, but that what kept him up at night was the human factor. In 22 years in fabrication, he had seen employees slowly become redundant. In just two years in IT, he found that, thanks to lightning-fast advancements, an employee’s irrelevance could be realized practically overnight.

While Kim’s first realization of the importance of DevOps occurred some seven years ago, it is clear this field is coming into its own for more organizations than just the unicorns of Facebook, Google and Amazon. Today, we are seeing a tectonic shift in the way business is conducted across all industries and the import of a dedicated and talented DevOps team will be paramount for the success of those companies.

Space programs join open source community, NASA releases source code catalogue Wed, 16 Apr 2014 20:32:28 +0000

Space programs join open source community, NASA releases source code catalogue is a post from: DevOpsANGLE

One small step for NASA, a great gift for the open source community. The National Aeronautics and Space Administration (NASA) plans to release the source code of some 1,000 software projects this week. The U.S. space agency will set up a website on which to publish the code, reports the technology magazine Wired.

The move follows a similar initiative in 2009 to provide free access to the code which ran systems on the Apollo 11 moon landings, and other open-source projects by US government agencies like Defense Advanced Research Projects Agency (DARPA).

The agency, in line with the White House’s Open Government policy, is also aiming to make the free online catalogue of more than 1,000 projects one of its most easily accessible. According to Dan Lockney, NASA’s Technology Transfer Program executive, the release is not a historical archive, but a collection of recent software solutions developed by the space agency.

William Eshagh, who is spearheading the project, said in an open source blog post that NASA is first focusing on providing a home for the current state of open source at the agency. This includes guidance on how to engage the open source process, points of contact, and a directory of existing projects. The second phase will concentrate on providing a robust forum for ongoing discussion of open source concepts, policies, and projects at the agency.

“In our third phase, we will turn to the tools and mechanisms development projects generally need to be successful, such as distributed version control, issue tracking, continuous integration, documentation, communication, and planning/management. During this phase, we will create and host a tool, service, and process chain to further lower the burden to going open,” he added.

The release will cover 15 broad categories offering a wide variety of applications for use by industry, academia, other government agencies and the general public. The catalog will cover project management systems, design tools, data handling and image processing, as well as solutions for life-support functions, aeronautics, structural analysis, and robotic and autonomous systems.

Encouraging innovation and entrepreneurship

Did you know, for example, that NASA developed the first CAD software? NASA developers created programs that became important not just to NASA but to the entire world. Software for missile and robot control should be among the releases, along with some work that falls within the area of artificial intelligence.

The U.S. government is the largest producer of public-domain software in the U.S., but until now it has done little to make that work useful to science and the public. By going open source, the government aims to have developers and private industry pick up the free resource and run with it, developing new commercial uses and variations.

One of the main goals of the database is to help develop technology that can be transferred to other sectors. They hope it will help hackers and entrepreneurs push their existing ideas in new directions as well as help trigger new concepts.

The software catalog will initially be released as a PDF. A printed version will follow on 21 May, and over the following weeks and months a searchable database and archive will be built.

“Software is an increasingly important element of the agency’s intellectual asset portfolio, making up about a third of our reported inventions every year,” says the post. “We are excited to be able to make that software widely available to the public with the release of our new software catalog.”

The catalog will initially be available online. Some code will remain government-only, including items that are restricted by their nature, such as guidance and navigation systems.

“NASA is committed to the principles of open government,” the site added. “By making NASA resources more accessible and usable by the public, we are encouraging innovation and entrepreneurship. Our technology transfer program is an important part of bringing the benefit of space exploration back to Earth for the benefit of all people.”

Back in February, DARPA, the U.S. Department of Defense’s agency for military research projects, opened a similar catalog. DARPA’s research covers not only new weapons systems but also both the theory and practice of computer science.

Docker containers bring flexibility, agility to application deployment | #RHSummit Wed, 16 Apr 2014 00:17:31 +0000

Docker containers bring flexibility, agility to application deployment | #RHSummit is a post from: DevOpsANGLE

Solomon Hykes, founder and CTO of Docker, discussed the company’s latest developments and its collaboration with Red Hat with theCUBE co-hosts John Furrier and Stu Miniman, live from this year’s Red Hat Summit. “Red Hat is committing to supporting Docker in the future,” Hykes said, explaining how the company offered a jump-start program to train some of its customers in deploying Docker services.

Docker currently has 30 employees. The company changed its name from dotCloud six months ago and raised $15 million in a Series B round, bringing its total funding to about $26 million. Docker used to be just a platform that hosts and runs applications online, but it has since added container and deployment technology, which explains the name change.

Asked why containers were so hot, Hykes explained that “it starts with the application.”

“It does start with the software and what you want the software to do,” said Hykes, noting this approach leads to building the architecture the software needs. “I like to think of the container as the Lego brick that makes the architecture possible. A container is a unit of deployment,” he said, defining it as the way you package your application for deployment.

“Our goal is for Docker to be available and ready to use on all major platforms,” Hykes said. Docker came out of Platform-as-a-Service (PaaS), so the two are closely connected. According to Hykes, PaaS is a specialized way to use containers: “if you use containers, it allows you to be more flexible down the road.”

Docker’s market positioning & future


Asked to comment on Docker’s market competition, Hykes said, “Docker gets compared with a lot of tools in the DevOps world. The answer is the same for all: Docker is not a direct replacement for any of those; you can use them together. Docker does its own thing, it’s a container engine.”

Docker is currently at version 0.10 and releases a new version every month; it is not yet recommended for production use. “The next release will be the first release candidate. People are ignoring that and using it in production,” some on thousands of servers, Hykes warned. He went on to note some of the remaining obstacles in open source development and enterprise cloud adoption.

“The applications that are built today are being built for a platform that no one can point to,” Hykes said. “It’s out there, it’s not standardized. We’re at the same phase for the cloud that personal computer programmers were at in the ’70s. There’s a frenzy for everyone to participate and build in it.”

Describing the Docker culture, Hykes said, “we like to build things; typically we’re the kind of engineers that get obsessed about the tools. To build good software, you need to invest a bit of time into the tools. We want the tools to be awesome.”

First phase of TrueCrypt audit finds no backdoors in the encryption software Tue, 15 Apr 2014 18:55:43 +0000

First phase of TrueCrypt audit finds no backdoors in the encryption software is a post from: DevOpsANGLE

The first phase of the TrueCrypt security audit has come back with good results: security researchers have found no evidence of backdoors or major security issues in the popular encryption software. This is only the first iteration of the audit, covering the binary and source code; the next will examine the cryptographic engine and its overall security.

Last year, researchers called for a full audit of the open source encryption solution TrueCrypt in a bid to determine whether the US National Security Agency (NSA) had attempted to weaken encryption standards and had planted backdoors in the encryption software.

At that time, TrueCrypt’s developers denied having implemented a backdoor in the software, saying that TrueCrypt only allows decryption with the correct password or key. Cryptography researchers Kenneth White and Matthew Green, along with iSEC, the company contracted to review the bootloader and Windows kernel driver, were given the responsibility of finding out the truth.

The first phase of the results, published in a PDF file, found that the official binary assembly does not contain hidden features and is identical to the supplied source code.

The first stage, a meticulous study of the bootloader source and the Windows kernel driver, went on for seven months. Eleven problems were found in the software, but they appear to have ended up in the code inadvertently and are not deliberate security loopholes. Most issues were of Medium (four found) or Low (four found) severity, with an additional three rated Informational (pertaining to defense in depth). The researchers note that neither the bootloader nor the kernel driver meets expected standards for secure code.

“Overall, the source code for both the bootloader and the Windows kernel driver did not meet expected standards for secure code. This includes issues such as lack of comments, use of insecure or deprecated functions, inconsistent variable types, and so forth,” the report says. “In contrast to the TrueCrypt source code, the online documentation available at does a very good job at both describing TrueCrypt functionality and educating users on how to use TrueCrypt correctly. This includes recommendations to enable full disk encryption that protects the system disk, to help guard against swap, paging, and hibernation-based data leaks.

“The team also found a potential weakness in the Volume Header integrity checks. Currently, integrity is provided using a string (“TRUE”) and two (2) CRC32s. The current version of TrueCrypt utilizes XTS2 as the block cipher mode of operation, which lacks protection against modification; however, it is insufficiently malleable to be reliably attacked. The integrity protection can be bypassed, but XTS prevents a reliable attack, so it does not currently appear to be an issue. Nonetheless, it is not clear why a cryptographic hash or HMAC was not used instead.”
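The report’s point about CRC32 versus a keyed check can be illustrated with a short sketch. This is not TrueCrypt’s actual header format; the layout, key derivation parameters, and names below are invented for illustration. The contrast is simply that CRC32 is keyless, so an attacker who modifies data can recompute a matching checksum, while an HMAC requires a secret key the attacker doesn’t have.

```python
import hashlib
import hmac
import zlib

MAGIC = b"TRUE"  # toy stand-in for the header's magic string

def checksum_ok(header_bytes, crc):
    """CRC32 integrity check, as a keyless checksum."""
    return (zlib.crc32(header_bytes) & 0xFFFFFFFF) == crc

header = MAGIC + b"original key material here......"
crc = zlib.crc32(header) & 0xFFFFFFFF

# A tampered header passes: the attacker simply recomputes the CRC32.
tampered = MAGIC + b"attacker-chosen bytes go here..."
tampered_crc = zlib.crc32(tampered) & 0xFFFFFFFF
print(checksum_ok(tampered, tampered_crc))  # True

# An HMAC keyed with material derived from the passphrase prevents this:
# without the key, the attacker cannot produce a valid tag.
key = hashlib.pbkdf2_hmac("sha256", b"passphrase", b"salt", 100_000)
tag = hmac.new(key, header, hashlib.sha256).digest()
forged = hmac.new(b"wrong-key-guess", tampered, hashlib.sha256).digest()
print(hmac.compare_digest(tag, forged))  # False
```

This is the gap the auditors flagged: the CRC32-based check detects accidental corruption but offers no protection against deliberate modification, whereas a cryptographic hash or HMAC would.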

Most importantly, however, the audit did not reveal any security issues deliberately introduced into the tool or anything that massively compromises its security. The question is particularly urgent today, in light of the extremely dangerous Heartbleed bug, discovered in the OpenSSL library, which had apparently persisted undetected since 2011.

The iSEC team recommends that the Windows build environment be updated, because it depends a great deal on tools and software packages that are difficult to obtain from trustworthy sources. Once this is done, all binaries should be rebuilt with all security features enabled.

The developer community took to Reddit to praise the TrueCrypt development team, calling this a great first step in bringing the profession of software development into line with more established disciplines in science, engineering and financial audit. The public peer-review process is facilitated by the open source codebase and by tools like GitHub that merge version control with change review and attribution.

Phase two of the audit focuses on cryptanalysis. The various cryptographic methods that are integrated into the software will be examined more closely during second phase. This second phase will also probably take several months to complete.

The timing of this report’s release couldn’t have been better. After the Heartbleed exploit broke, security experts raised concerns about independently developed open source security products.

Windows Phone 8.1 download now available Tue, 15 Apr 2014 12:14:45 +0000

Windows Phone 8.1 download now available is a post from: DevOpsANGLE

Microsoft has revealed what new features Windows Phone 8.1 will deliver once it finally rolls out to consumers. Though at first glance the new version of the OS may not seem to bring much, this is a major update, the first since the software was last revamped 18 months ago.

Windows Phone 8.1 delivers Cortana, Microsoft’s answer to Apple’s Siri; a new notification center; and more customization features to make home screens stand out.

Developers will be the first to get the new software version as it starts rolling out this week; they can download the update to their Windows Phone 8 now. Non-developers can either wait for their carrier’s over-the-air update, or head to the Windows Phone App Studio site, sign in with their Microsoft account and create a project by signing up for free. Once that’s done, it’s possible to download a special preview app, which asks you to sign in with your Microsoft account so the device can detect the Windows Phone 8.1 update.

Windows Phone 8.1 is slated to be released to the general public this April, but one feature will not be available in all regions. Cortana will initially roll out only in the US, followed by the UK and China later this year. The rest of the world will have to wait until 2015, which is quite a long time. Microsoft promises a friendlier first-time user experience with Windows Phone 8.1, which the company hopes will entice more users to consider it instead of iOS or Android.

The announcement was made by Joe Belfiore, Microsoft’s corporate vice president and manager for Windows Phone Program Management, on Twitter, enticing developers with a screenshot.

A word of caution


Windows Phone 8 users should be aware that there are limitations that come with updating your phone this way. First off, your device’s warranty will be considered void until your network carrier officially rolls out the Windows Phone 8.1 update and any associated carrier or device customizations.

This means all you’ll get are the features of Windows Phone 8.1, but not those being rolled out by device manufacturers like Nokia or network carriers like AT&T. If you encounter problems with the unofficial update, you will not be able to revert your phone to the previous software version, which means you’ll be stuck with the buggy software until the official update rolls out. One final reason you might want to hold off on updating your device is that the developer preview of Windows Phone 8.1 will not include the custom lock screen support previewed at Build 2014.

photo credit: Nicola since 1972 via photopin cc

Microsoft pitches ‘write once, run anywhere’ cross platform universal development with Visual Studio 2013 update Wed, 09 Apr 2014 19:21:41 +0000

Microsoft pitches ‘write once, run anywhere’ cross platform universal development with Visual Studio 2013 update is a post from: DevOpsANGLE

To support “write once, run anywhere” application development on the new Windows Phone 8.1 and Windows 8.1 Update 1, Microsoft has announced the Release Candidate of Visual Studio 2013 Update 2. This update aligns with recent Microsoft products and supports the new features of Windows 8.1 Update 1 and Windows Phone 8.1.

Last November, Microsoft launched Visual Studio 2013 and committed to delivering more to developers with Azure integration. Compared with the first update, it includes a number of functional innovations and most notably “universal Windows apps” development, which uses the Windows Runtime to build apps that will run on Windows Phone 8.1, Windows 8.1 and even Xbox One.

Microsoft also wants its Visual Studio to embrace Azure. This means that developers can conveniently add or remove instances or servers as they require, without needing to exit the Visual Studio environment. For developers who are already building apps on the Azure platform, VS 2013 portal will give them an end-to-end solution for almost all of the tools they need to develop, deploy and manage their apps.

Universal Apps for Windows and Windows Phone

With the Update 2 RC of the Visual Studio 2013 development environment, Microsoft gives developers the ability to optimize an app for all Windows platforms, whether PC, smartphone or tablet. Visual Studio introduces the notion of shared projects for C#, C++ and JavaScript, making it as easy as possible to share code and assets between the Windows and Windows Phone heads of the same app.

In addition to new features for productivity and collaboration, the update includes “Shared Projects,” which allow you to develop a single app suitable for smartphones, tablets and PCs. Developers can build the app once, and it can be installed on multiple devices. A universal app will be identified with an icon depicting a smartphone superimposed on a computer, and a customer who pays for the app once can download it to any of their devices.

Universal Windows apps will reduce development time, as developers will only need to make small changes to the interface, such as integrating support for mouse and keyboard on the desktop, or for the controller and Kinect on Xbox One.

“Universal projects allow developers to use approximately 90 percent of the same code, a single packaging system, and a common user interface to target apps for phones, tablets and PCs,” Microsoft said in a press release.

New tools and APIs for building seamless apps

The suite of tools in Visual Studio for Windows Store development can all be used during development, debugging and diagnostics for Windows Phone 8.1 projects. VS 2013 Update 2 introduces new APIs that allow developers to easily create applications that target Windows platforms. Windows Phone applications can now use the Windows Runtime.

With Visual Studio 2013, it will be possible to create WinRT applications for Windows Phone using C#/XAML, C++/XAML, C++/DirectX and JavaScript/HTML. The Speech API has been updated to integrate with Cortana, the new voice assistant introduced in Windows Phone. Developers can integrate the services offered by Cortana in their applications, including triggering certain actions via voice commands.

Microsoft also introduced a new CPU Usage diagnostic tool that lets developers monitor processor activity in real time during program execution. The update also has a Memory Usage tool that lets developers monitor live how their applications consume memory. These diagnostic tools (memory profiler, UI responsiveness, energy consumption) can be used while developing and debugging Windows Phone 8.1 applications.

Microsoft also announced the pre-release of .NET Native compilation, which combines the productivity of C# and .NET with the performance characteristics of native code. .NET Native is a new optimizing compiler that leverages Microsoft's C++ compiler back end to produce native images, with gains in startup time, memory usage and performance.

This update also includes the final TypeScript 1.0 release, the superset of JavaScript developed by Microsoft's open source division. The Visual Studio 2013 Update 2 RC also contains an update to NuGet 2.8.1 and Entity Framework 6.1, as well as many other updates, fixes and improvements to the Windows Phone 8.1 platform, ASP.NET and Windows Azure.

Visual Studio Online

At Build, Microsoft announced that Visual Studio Online, a Web-based version of Visual Studio, had exited its testing period and was now available to all comers. Microsoft promised a high degree of reliability with the GA release, saying Visual Studio Online comes with a 99.9 percent SLA for account services and functionality.

Despite what the name might suggest, Visual Studio Online is not a complete online version of the Visual Studio development environment. Rather, it is a distributed version control system and the successor to Team Foundation Service. The online development tool focuses primarily on projects in the cloud and launched in November as a preview.

Microsoft says the cloud tool provides everything a DevOps team needs for modern cloud-based development: provisioning of dev and test resources, development and collaboration features, build, release and deployment capabilities, application telemetry and management, and more.

Microsoft pitches ‘write once, run anywhere’ cross platform universal development with Visual Studio 2013 update is a post from: DevOpsANGLE

Twitter is developing multi-tenancy database to support 6,000 tweets per second Tue, 08 Apr 2014 22:27:59 +0000 Continue reading

Twitter is developing multi-tenancy database to support 6,000 tweets per second is a post from: DevOpsANGLE

As Twitter is now considered the global platform for public conversation, the company's storage requirements have grown as well. In recent days, Twitter has experienced something on the order of 5,000 to 10,000 tweets a second (see the Twitter blog) from more than 240 million Twitter accounts (per Twitter's investor report).

Over the last few years, the company found itself in need of a storage system that could serve millions of queries per second with extremely low latency in a real-time environment. Availability and speed became the most important factors: the system needed not only to be fast, but to be scalable across several regions around the world.

This is why the company has drawn up a long-term plan to improve its IT architecture, prioritizing infrastructure investments around the simplicity, scalability and performance of its database portfolio.

The Manhattan database system


Last week, Twitter published a blog post detailing its Manhattan database system, built to power a wide variety of applications. The distributed, real-time database was designed to serve multiple teams and applications within the company whose needs existing technologies could no longer handle.

The Manhattan database system is built to cope with the roughly 6,000 tweets, plus retweets and replies, that flood into its system every second. Manhattan was built in-house around Twitter's internal needs; the company felt it could not operate reliably at scale without building a new system from the ground up. Twitter currently uses the open source databases MySQL and Cassandra to run its massive online empire.

“We were spending far too much time firefighting production systems to meet the performance expectations of our various products, and standing up new storage capacity for a use case involved too much manual work and process. Our experience developing and operating production storage at Twitter’s scale made it clear that the situation was simply not sustainable,” Twitter’s Peter Schuller wrote on the company’s blog.

Manhattan, which began development two years ago, is designed to be an all-in-one solution: Twitter's storage service is meant to be consumed just like any other cloud storage service. Currently it exposes a key-value store to users, but Twitter is looking to extend the database with a graph-based capability.

The architecture


The database system consists of three storage engines, designed respectively for read-only Hadoop data, write-heavy data and read-heavy data. The engines are backed by several services: a strong-consistency service, which lets customers get strong consistency for certain sets of operations; a time-series counters service, to handle high-volume time-series counters in Manhattan; and a service for importing Hadoop data.

The data captured by the social network is stored in three different engines. The first, called seadb, is a read-only file format; the second, sstable, is a log-structured merge tree for write-heavy workloads; and the last, btree, is built for heavy reads and light writes.
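The sstable engine is described as a log-structured merge tree. The core idea, buffering writes in memory and flushing them to immutable sorted runs that reads check newest-first, can be sketched minimally. This toy Python sketch is illustrative only (it omits compaction, the write-ahead log and on-disk storage) and has no relation to Manhattan's actual code:

```python
import bisect


class TinyLSM:
    """Toy log-structured merge tree: writes land in an in-memory
    memtable; when it fills, it is flushed to an immutable sorted run.
    Reads check the memtable first, then runs from newest to oldest."""
    def __init__(self, memtable_limit=4):
        self.memtable = {}
        self.runs = []            # list of sorted (key, value) lists
        self.limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            self.runs.append(sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.runs):        # newest run wins
            i = bisect.bisect_left(run, (key,))
            if i < len(run) and run[i][0] == key:
                return run[i][1]
        return None


db = TinyLSM(memtable_limit=2)
db.put("a", 1)
db.put("b", 2)        # memtable full, flushed to a sorted run
db.put("a", 3)        # newer value lives in the memtable
print(db.get("a"), db.get("b"))  # 3 2
```

Sorted runs make writes sequential and cheap, which is exactly why the structure suits write-heavy workloads.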

Manhattan automatically routes incoming data to the appropriate engine based on the file format. The output of Hadoop workloads is fed into the Hadoop File System, and Manhattan transforms that information into seadb files so it can be imported into the cluster for fast serving from SSDs or memory.

Developers can select the consistency of data when reading from or writing to Manhattan, allowing them to create new services with varying tradeoffs between availability and consistency. The company also developed internal APIs to expose this data for cost analysis, which allows developers to determine what use cases are costing the business the most, as well as which ones aren’t being used as often.
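The per-operation consistency choice can be illustrated with a toy replicated store where each read and write picks its own level. This hypothetical Python sketch is not Manhattan's actual API; it only shows the availability/consistency trade-off (and, being a toy, it skips versioning and conflict resolution):

```python
import random


class ReplicatedStore:
    """Toy replicated key-value store: each operation chooses a
    consistency level. A quorum write plus a quorum read always
    overlap on at least one replica, so the value is found."""
    def __init__(self, replicas=3):
        self.replicas = [dict() for _ in range(replicas)]

    def write(self, key, value, consistency="quorum"):
        n = len(self.replicas)
        targets = n if consistency == "all" else n // 2 + 1
        for replica in random.sample(self.replicas, targets):
            replica[key] = value

    def read(self, key, consistency="quorum"):
        n = len(self.replicas)
        targets = 1 if consistency == "one" else n // 2 + 1
        values = [r.get(key) for r in random.sample(self.replicas, targets)]
        hits = [v for v in values if v is not None]
        return hits[0] if hits else None


store = ReplicatedStore(replicas=3)
store.write("user:1", "alice", consistency="quorum")
print(store.read("user:1", consistency="quorum"))  # alice
```

A "one" read is faster but may hit a replica the write skipped; that is the tradeoff a developer is selecting per call.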

“Engineers can provision what their application needs (storage size, queries per second, etc.) and start using storage in seconds without having to wait for hardware to be installed or for schemas to be set up,” Schuller wrote.

Looking ahead


As for further development, Twitter plans to release a white paper with more technical details on Manhattan. The company is also working on secondary indexes, which will let developers add an additional set of range keys to a database's index. Secondary indexes will further speed up queries and let developers navigate through large amounts of data.
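At its simplest, a secondary index is a second lookup structure mapping an attribute back to primary keys, so queries on that attribute avoid scanning every record. A minimal hypothetical sketch in Python (not Manhattan's actual design; the "author" attribute is invented for illustration):

```python
class IndexedStore:
    """Toy primary key-value store plus one secondary index."""
    def __init__(self):
        self.primary = {}     # pk -> record
        self.by_author = {}   # author -> set of pks (the secondary index)

    def put(self, pk, record):
        old = self.primary.get(pk)
        if old is not None:                      # keep the index in sync
            self.by_author[old["author"]].discard(pk)
        self.primary[pk] = record
        self.by_author.setdefault(record["author"], set()).add(pk)

    def find_by_author(self, author):
        # Index lookup instead of a full scan over self.primary
        return [self.primary[pk]
                for pk in sorted(self.by_author.get(author, ()))]


idx = IndexedStore()
idx.put(1, {"author": "alice", "text": "hi"})
idx.put(2, {"author": "bob", "text": "yo"})
idx.put(3, {"author": "alice", "text": "ok"})
print([r["text"] for r in idx.find_by_author("alice")])  # ['hi', 'ok']
```

The cost is paid on writes, which must update both structures, in exchange for fast reads by the indexed attribute.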

“The challenges are increasing and the number of features being launched internally on Manhattan is growing at a rapid pace. Pushing ourselves harder to be better and smarter is what drives us on the Core Storage team,” Schuller concluded. See Twitter’s full rundown of the database for further reading.

Given the company’s gusto for open source, it wouldn’t be startling if it open sourced Manhattan in the coming days. Twitter recently contributed code to Facebook’s WebScaleSQL open source project, an effort to build a database designed to scale to massive proportions.

photo credit: mkhmarketing via photopin cc


AWSSummit DevOps Round up: Composite application development accelerating DevOps 2.0 Model Tue, 08 Apr 2014 21:31:21 +0000 Continue reading

AWSSummit DevOps Round up: Composite application development accelerating DevOps 2.0 Model is a post from: DevOpsANGLE

While the cloud gives most organizations easy access to nearly unlimited compute power, it also creates significant complexity and scaling challenges. Approaching these issues with the IT strategies of years past will fail. Simply put, companies need first and foremost to be open to the idea of change: from team hierarchies to cloud deployments to automation tools.

Only with an open mind-set can you change your practices to embrace DevOps principles and align development and operations on the same end goal. This year's AWS Summit showed Amazon's growing foothold in the enterprise, as developers, IT, marketing and other departments utilize its cloud offerings for a myriad of projects.

DevOps is really about change, both cultural and in tooling, and the industry winner will need to appeal to developers and succeed in automation to properly integrate its cloud solutions. SiliconANGLE editor-in-chief John Furrier thinks AWS is "still not ready for prime time," but Amazon is clearly disrupting an entire industry with its AWS solutions and making the right moves.

Amazon’s take on DevOps from the Amazon Kinesis team


theCUBE co-hosts John Furrier and Jeff Frick interviewed Aditya Krishnan, AWS senior product manager for Kinesis, and Ryan Waite, general manager of data services at Amazon Web Services, at AWS Summit 2014 in San Francisco. The pair discussed how Amazon helps solve DevOps problems with a fully managed service that takes care of the heavy lifting for developers: easy data ingestion and storage, high data durability and the ability to scale seamlessly from kilobytes to terabytes an hour.

Furrier described Amazon Kinesis as a fully managed service for real-time processing of large streaming data, facilitating the development of applications that deal with real-time data. He added that Amazon has been adding features to Kinesis that help developers scale data up and down as needed, and asked about new developments in the Kinesis world.

Waite said Kinesis can store and process terabytes of data an hour from hundreds of thousands of sources. Data is replicated across multiple availability zones to ensure high durability and availability.

Waite added that, in terms of use and usefulness, Kinesis can collect and process hundreds of terabytes of data per hour from hundreds of thousands of sources. Developers can thus write applications that process information in real time from sources such as website click-streams, social media feeds, operational logs, metering data and Internet of Things sensors.
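A Kinesis-style stream ingests from many sources by hashing each record's partition key to pick a shard, preserving arrival order within a shard. The following in-memory Python sketch illustrates only that idea; real applications talk to the service through the AWS SDK:

```python
import hashlib


class TinyStream:
    """In-memory sketch of a sharded stream: a record's partition key
    is hashed to choose a shard, and each shard keeps arrival order."""
    def __init__(self, shard_count=2):
        self.shards = [[] for _ in range(shard_count)]

    def put_record(self, partition_key, data):
        digest = hashlib.md5(partition_key.encode()).digest()
        shard_id = digest[0] % len(self.shards)   # same key -> same shard
        self.shards[shard_id].append(data)
        return shard_id

    def read_shard(self, shard_id):
        return list(self.shards[shard_id])


stream = TinyStream(shard_count=2)
sid = stream.put_record("sensor-1", b"t=20")
stream.put_record("sensor-1", b"t=21")   # same key, same shard, order kept
print(stream.read_shard(sid))
```

Because each source keeps its own partition key, its records stay ordered while the stream as a whole scales out across shards.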

Furrier stated that "the use of data is shifting the development paradigm, but no one has really come out yet with a development kit or developer framework for data."

He then asked about Amazon's approach to a data framework for developers.

Krishnan said, “most big data processing has been done through batch-oriented approaches such as those used by Hadoop, or through database technologies such as data warehouses. To build applications that rely on this fast-moving data, many companies have developed their own systems or stitched together open source tools, but these are often complex to build, difficult to operate, inelastic and hard to scale and can be unreliable or lose data. Amazon Kinesis helps solve these problems by providing easy data ingestion and storage, high data durability and the ability to scale seamlessly.”

Commenting on composite application development across teams implementing DevOps, Furrier asked whether this is DevOps 2.0: "What's next after DevOps? What is Amazon's take on the next DevOps model?"

“DevOps is a great model; it has really worked for a number of startup companies. Amazon’s ability to take data from Kinesis and pump it right into Elastic MapReduce makes it easy for people to use their existing applications with a new system like Kinesis,” said Waite. “That kind of composing of applications accelerates DevOps, and Amazon will continue to do more and more of that kind of work,” he added.

Furrier asked what kinds of challenges and issues Amazon faced in adopting this model in terms of use and usefulness. Waite said Kinesis initially collected and processed its hundreds of terabytes of data with around 100 milliseconds of latency, but that was too high an SLA time for customers; the team has since reduced the data upload latency to 30 to 40 milliseconds.

With Amazon Kinesis, customers can quickly and easily add real-time analytics and other functionality to their applications, turning today’s explosive data growth into an opportunity to build competitive advantage and innovate for their customers, he added.

Amazon Workspaces for developers’ benefit


Mark Nunnikhoven, vice president of cloud & emerging technologies at Trend Micro, spoke to Jeff Frick in theCUBE at the AWS Summit about cloud security and how Amazon WorkSpaces is helping developers adopt the cloud in the enterprise. Nunnikhoven was also happy to announce that Trend Micro received pre-approved status on AWS.

For the most part, IT wants to deliver something easy that allows ops and dev to work at their own pace and in their own space without having to worry much about proprietary information and security. As a security provider, Trend Micro wants to make sure they provide tools that enable people to have that access. It offers products and guidance through professional services. Virtualized desktops able to run across multiple devices and host themselves in the same space do a lot to ease security concerns for both in-house and mobile development.

“Your data now lives in the AWS cloud and [Amazon] lets you access it from any of those devices, but the data always stays in that one place, so they’re trying to solve that problem of data everywhere by giving you access everywhere…” says Nunnikhoven. “Distributing access, if you don’t give your users access where they want it and when they want it, they’ll route around you… As a security provider we want to make sure that we’re providing tools that enable people to provide that access.”

He added that the AWS Summit is mainly for developers, operations folks and DevOps folks. They know they want secure applications, but at the end of the day they're responsible for delivering a social app or a mobile game. Trend Micro gives them a way to offload security, providing developers in the trenches protection without getting in their way.

AWS is educating developers


theCUBE co-hosts John Furrier and Jeff Frick also interviewed Rochana Golani, global head of training, curriculum and certification at AWS. Golani said Amazon offers instructor-led training as well as YouTube videos for self-paced developers. AWS training helps them learn to design, develop and operate secure, efficient applications on the AWS cloud.

Furrier prodded Golani to elaborate on the lifecycle and details of the certification program. Golani said AWS training courses are designed for system architects, sysops administrators and developers.

“It’s giving the developers the flexibility to pick the language that they want or the programming framework that they are interested in and learning around them,” she said.

Golani said Amazon's offerings let these individuals gain the skills to design cloud-based solutions. Regardless of students' starting skills, Amazon has trained tens of thousands of customers and individuals since last year on topics ranging from cloud computing and the AWS cloud to big data analytics on AWS.

