
New Azure Firewall certification and features in Q1 CY2020


This post was co-authored by Suren Jamiyanaa, Program Manager, Azure Networking

We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are excited to share several new Azure Firewall capabilities based on your top feedback items:

  • ICSA Labs Corporate Firewall Certification.
  • Forced tunneling support now in preview.
  • IP Groups now in preview.
  • Customer configured SNAT private IP address ranges now generally available.
  • High ports restriction relaxation now generally available.

Azure Firewall is a cloud native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

ICSA Labs Corporate Firewall Certification

ICSA Labs is a leading provider of third-party testing and certification of security and health IT products, as well as network-connected devices. They measure product compliance, reliability, and performance for most of the world’s top technology vendors.

Azure Firewall is the first cloud firewall service to attain the ICSA Labs Corporate Firewall Certification. For the Azure Firewall certification report, see information here. For more information, see the ICSA Labs Firewall Certification program page.

Figure one – Azure Firewall now ICSA Labs certified.

Forced tunneling support now in preview

Forced tunneling lets you redirect all internet-bound traffic from Azure Firewall to your on-premises firewall or a nearby Network Virtual Appliance (NVA) for additional inspection. By default, forced tunneling isn't allowed on Azure Firewall to ensure all its outbound Azure dependencies are met.

To support forced tunneling, service management traffic is separated from customer traffic. An additional dedicated subnet named AzureFirewallManagementSubnet is required with its own associated public IP address. The only route allowed on this subnet is a default route to the internet, and BGP route propagation must be disabled.

Within this configuration, the AzureFirewallSubnet can now include routes to any on-premises firewall or NVA to process traffic before it's passed to the internet. You can also publish these routes via BGP to AzureFirewallSubnet if BGP route propagation is enabled on this subnet. For more information, see the Azure Firewall forced tunneling documentation.
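As a rough sketch of the subnet layout this requires, the two firewall subnets can be carved out of a VNet address space with Python's `ipaddress` module. The /26 prefix is the documented minimum for AzureFirewallSubnet; assuming the same size for the management subnet, and an illustrative 10.0.0.0/16 address space:

```python
import ipaddress

# Illustrative VNet address space (an assumption, not from this article).
vnet = ipaddress.ip_network("10.0.0.0/16")

# Carve consecutive /26 blocks: /26 is the minimum size Azure Firewall
# requires for AzureFirewallSubnet; the same size is assumed here for
# the dedicated management subnet.
blocks = vnet.subnets(new_prefix=26)
firewall_subnet = next(blocks)    # AzureFirewallSubnet
management_subnet = next(blocks)  # AzureFirewallManagementSubnet

print(firewall_subnet)    # 10.0.0.0/26
print(management_subnet)  # 10.0.0.64/26
```

In a real deployment these prefixes would be chosen to fit your existing VNet plan; the point is only that the management subnet is a separate, dedicated block alongside the firewall subnet.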



Figure two – Creating a firewall with forced tunneling enabled.

IP Groups now in preview

IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules. You can give your IP group a name and create one by entering IP addresses or uploading a file. IP Groups eases your management experience and reduces time spent managing IP addresses by letting you use a group in a single firewall or across multiple firewalls. For more information, see the IP Groups in Azure Firewall documentation.
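Before loading addresses into an IP Group, it can help to de-duplicate and merge adjacent ranges so the group stays small and readable. A minimal sketch using Python's standard `ipaddress` module (the entries below are illustrative, not from the article):

```python
import ipaddress

# Hypothetical raw entries destined for an IP Group.
raw_entries = [
    "10.1.0.0/25",
    "10.1.0.128/25",   # adjacent to the block above -> merges into a /24
    "192.168.5.7/32",
]

networks = [ipaddress.ip_network(e) for e in raw_entries]

# collapse_addresses merges adjacent and overlapping networks.
collapsed = list(ipaddress.collapse_addresses(networks))
print([str(n) for n in collapsed])  # ['10.1.0.0/24', '192.168.5.7/32']
```

The collapsed list is what you would then enter (or upload as a file) when creating the IP Group.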


Figure three – Azure Firewall application rules utilize an IP group.

Customer configured SNAT private IP address ranges

Azure Firewall provides automatic Source Network Address Translation (SNAT) for all outbound traffic to public IP addresses. Azure Firewall doesn’t SNAT when the destination IP address is in a private IP address range per IANA RFC 1918. If your organization uses a public IP address range for private networks or opts to force tunnel Azure Firewall internet traffic via an on-premises firewall, you can configure Azure Firewall to not SNAT additional custom IP address ranges. For more information, see Azure Firewall SNAT private IP address ranges.
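The default SNAT decision described above can be sketched in a few lines with Python's `ipaddress` module. This only illustrates the RFC 1918 membership check, not Azure Firewall's actual implementation:

```python
import ipaddress

# The three RFC 1918 private ranges that Azure Firewall, by default,
# does not SNAT traffic toward.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def snat_applied(destination: str) -> bool:
    """True if outbound traffic to this destination would be SNATed
    under the default configuration (no custom private ranges)."""
    ip = ipaddress.ip_address(destination)
    return not any(ip in net for net in RFC1918)

print(snat_applied("10.1.2.3"))   # False: private destination, no SNAT
print(snat_applied("52.10.1.1"))  # True: public destination, SNATed
```

The custom-ranges feature amounts to extending the `RFC1918` list above with your own prefixes.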


Figure four – Azure Firewall with custom private IP address ranges.

High ports restriction relaxation now generally available

Since its initial preview release, Azure Firewall had a limitation that prevented network and application rules from including source or destination ports above 64,000. This default behavior blocked RPC-based scenarios, specifically Active Directory synchronization. With this new update, customers can use any port in the 1-65535 range in network and application rules.

Next steps

For more information on everything we covered above, please see the following blogs, documentation, and videos.


Announcing the new Bing Webmaster Tools

Over the last few months, we have heard from the webmaster ecosystem that the Bing Webmaster Tools user interface is slow and outdated. With our user-first focus, we have taken your feedback and have been working on modernizing the tools. We are delighted to announce the first iteration of the refreshed Bing Webmaster Tools portal.



The refreshed portal is built on the key principles of keeping the design simple and making the tools faster, cleaner, more responsive, and actionable.

We have updated the backend datastore to improve data extraction and redesigned the user experience to make it more user-friendly and intuitive. Keeping the needs of users in mind, the portal is also device-responsive, giving users the flexibility to access it across devices.

In the first iteration, the new portal has three key features:
  1. Backlinks - The Inbound Links report in the current portal is integrated with the Disavow links tool to become the new Backlinks report in the refreshed portal.
  2. Search Performance - The Page Traffic and Search Keywords reports are integrated into the new Search Performance report.
  3. Sitemaps - The Sitemaps page is a refreshed version of the current portal's Sitemaps page.
We are releasing the new portal to a select set of users this week and will be rolling it out to all users by the first week of March. To access the new portal, sign in to Bing Webmaster Tools, navigate to the Sitemaps, Inbound Links, Page Traffic, or Search Keywords reports, and click the links to open the new portal.


Over the next few months, we will focus on moving all functionality to the new portal. During the transition, users will be able to use the current and new pages simultaneously for a short period. We will deprecate functionality from the old portal a few weeks after its inclusion in the new portal. We will strive to make this transition seamless and exciting for our users.

The Bing Webmaster APIs will stay the same, so users who get their data programmatically through the webmaster API do not have to make any changes.

Reach out to us and share feedback on Twitter and Facebook and let us know how you feel about the new Bing Webmaster Tools. If you encounter any issues, please raise a service ticket with our support team.

Regards,
The Bing Webmaster Tools team
 

The new Office app now generally available for Android and iOS


A few months ago, we introduced a new mobile app called Office—a whole new experience designed to be your go-to app for getting work done on a mobile device. It combines Word, Excel, and PowerPoint into a single app and introduces new capabilities that enable you to create content and accomplish tasks in uniquely mobile…

The post The new Office app now generally available for Android and iOS appeared first on Microsoft 365 Blog.

Azure Security Center for IoT RSA 2020 announcements


We announced the general availability of Azure Security Center for IoT in July 2019. Since then, we have seen a lot of interest from both our customers and partners. Our team has been working on enhancing the capabilities we offer our customers to secure their IoT solutions. As our team gets ready to attend the RSA conference next week, we are sharing the new capabilities we have in Azure Security Center for IoT.

As organizations pursue digital transformation by connecting vital equipment or creating new connected products, IoT deployments will get bigger and more common. In fact, the International Data Corporation (IDC) forecasts that IoT will continue to grow at double-digit rates until IoT spending surpasses $1 trillion in 2022. As these IoT deployments come online, newly connected devices will expand the attack surface available to attackers, creating opportunities to target the valuable data generated by IoT. Organizations are challenged with securing their IoT deployments end-to-end, from devices to applications and data, including the connections between them.

Why Azure Security Center for IoT?

Azure Security Center for IoT provides threat protection and security posture management designed for securing entire IoT deployments, including Microsoft and third-party devices. Azure Security Center for IoT is the first IoT security service from a major cloud provider that enables organizations to prevent, detect, and help remediate potential attacks on all the different components that make up an IoT deployment—from small sensors, to edge computing devices and gateways, to Azure IoT Hub, and on to the compute, storage, databases, and AI or machine learning workloads that organizations connect to their IoT deployments. This end-to-end protection is vital to secure IoT deployments.

Added support for Azure RTOS operating system

Azure RTOS is a comprehensive suite of real-time operating systems (RTOS) and libraries for developing embedded real-time IoT applications on microcontroller (MCU) devices. It includes Azure RTOS ThreadX, a leading RTOS with off-the-shelf support for most leading chip architectures and embedded development tools. Azure Security Center for IoT now extends support to the Azure RTOS operating system, in addition to the Linux (Ubuntu, Debian) and Windows 10 IoT Core operating systems. Azure RTOS will ship with a built-in security module that covers common threats on real-time operating system devices. The offering includes detection of malicious network activities, device behavior baselining based on custom alerts, and recommendations that help improve the security hygiene of the device.

New Azure Sentinel connector

As information technology, operational technology, and the Internet of Things converge, customers are faced with rising threats.

Azure Security Center for IoT announces the availability of an Azure Sentinel connector that enables onboarding of IoT data workloads into Sentinel from Azure IoT Hub-managed deployments. This integration provides investigation capabilities on IoT assets from Azure Sentinel, allowing security pros to combine IoT security data with data from across the organization for artificial intelligence or advanced analysis. With the Azure Sentinel connector, you can now monitor alerts across all your IoT Hub deployments, act on potential risks, inspect and triage your IoT incidents, and run investigations to track an attacker's lateral movement within your network.

With this new announcement, Azure Sentinel is the first security information and event management (SIEM) tool with native IoT support, allowing SecOps teams and analysts to identify threats in complex converged networks.

Microsoft Intelligent Security Association partnership program for IoT security vendors

Through partnering with members of the Microsoft Intelligent Security Association, Microsoft is able to leverage a vast knowledge pool to defend against a world of increasing IoT threats in enterprise, healthcare, manufacturing, energy, building management systems, transportation, smart cities, smart homes, and more. Azure Security Center for IoT's simple onboarding flow connects solutions like Attivo Networks, CyberMDX, CyberX, Firedome, and SecuriThings—enabling you to protect your managed and unmanaged IoT devices, view all security alerts, reduce your attack surface with security posture recommendations, and run unified reports in a single pane of glass.

For more information on the Microsoft Intelligent Security Association partnership program for IoT security vendors, check out our tech community blog.

Availability on government regions

Starting on March 1, 2020, Azure Security Center for IoT will be available in the USGov Virginia and USGov Arizona regions.

Organizations can monitor their entire IoT solution, stay ahead of evolving threats, and fix configuration issues before they become threats. When combined with Microsoft’s secure-by-design devices, services, and the expertise we share with you and your partners, Azure Security Center for IoT provides an important way to reduce the risk of IoT while achieving your business goals.

To learn more about Azure Security Center for IoT, please visit our documentation page. To learn more about our new partnerships, please visit the Microsoft Intelligent Security Association page. Upgrade to Azure Security Center Standard to benefit from IoT security.


Simplifying Microsoft Edge configuration profiles for Jamf Pro


With our Microsoft Edge Beta Channel 81 release, we’re excited to announce preview support for Jamf Pro’s 10.19 changes to the Application & Custom Settings menu. This feature lets IT Admins paste a custom JSON policy manifest for creating configuration profiles directly in Jamf Pro instead of composing and uploading a plist file.

To learn about this feature from Jamf, read more about it at https://jamf.it/computer-configuration-profiles.

What is Jamf Pro’s Application & Custom Settings menu?

Before Jamf Pro 10.18, managing Office 365 involved manually building a .plist file. This was a time-consuming workflow that required a strong technical background. Jamf Pro 10.18 eliminated those barriers by streamlining the configuration process. However, IT Admins could only use this new user interface for specific applications and preference domains specified by Jamf.

In Jamf Pro 10.19, an admin can upload a JSON manifest as a “custom schema” to target any preference domain, and the graphical user interface will be generated from this manifest. The custom schema that’s created follows the JSON Schema specification.
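For illustration, here is a minimal, hypothetical custom-schema fragment in the JSON Schema shape Jamf Pro 10.19 accepts, built and printed with Python. The preference domain and policy name below are assumptions for the example, not taken from the actual Microsoft Edge manifest:

```python
import json

# Hypothetical custom schema for a single policy; a real manifest
# (such as Edge's policy_manifest.json) defines many more properties.
schema = {
    "title": "com.microsoft.Edge",        # preference domain (assumed)
    "type": "object",
    "properties": {
        "HomepageLocation": {             # example policy name (assumed)
            "type": "string",
            "title": "Homepage URL",
            "description": "URL to load as the home page",
        }
    },
}

# Jamf Pro generates its GUI for the configuration profile from JSON
# like this when it is pasted in as a custom schema.
print(json.dumps(schema, indent=2))
```

Each entry under `properties` becomes a form field in the generated interface, typed according to its `type` keyword.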

Why are we supporting this feature?

We’re committed to giving customers the optimal experience for managing Microsoft Edge. Supporting the update to Jamf Pro’s Application & Custom Settings menu removes the manual plist operation, which means our users get the most streamlined and error-free experience for managing Microsoft Edge in Jamf Pro.

How to get the policy manifest for a specific version of Microsoft Edge

To get the policy manifest:

  • Go to the Microsoft Edge Enterprise landing page.
  • On the CHANNEL/BUILD dropdown list, select Beta 81.*.
  • On the PLATFORM dropdown list, select macOS 64 bit.
  • Click GET POLICY FILES to download our policy templates bundle. (Note: Currently, the policy templates bundle is signed as a CAB file. You’ll need to use a third-party tool, such as The Unarchiver, to open the file on macOS.)

After you unpack the CAB file, navigate to the “mac” top level directory. The manifest, which is named “policy_manifest.json”, is in the “mac” directory.

This manifest will be published in every policy bundle starting with 81.0.416.3. If you want to test policies in the Dev channel, you can take the manifest associated with each Dev release and test it in Jamf Pro.

You can learn more about using the policy manifest in Jamf Pro in our documentation here.

Share your feedback

We’d love to get your feedback about using our manifest with Jamf Pro. The best places to contact us are our Enterprise community forum, Twitter at @MSEdgeDev, or the #microsoft-edge channel in the MacAdmins community.

While our manifest is in preview, we recognize that this feature is new for Jamf as well. If you have feedback or problems that you’d like to communicate to Jamf, you can email support@jamf.com or contact your account support specialist.

– Gray Houston, Software Engineer, Microsoft Edge

The post Simplifying Microsoft Edge configuration profiles for Jamf Pro appeared first on Microsoft Edge Blog.


How to install Visual Studio Code on a Raspberry Pi 4 in minutes


Four years ago I wrote how to BUILD (literally compile) Visual Studio Code for a Raspberry Pi ARM machine. Just a few months later in November, community member Jay Rodgers released his labor of love - nightly builds of VS Code for Chromebooks and Raspberry Pi.

If you want to get unofficial builds of Visual Studio Code running on a Raspberry Pi (I know you have one!) you should use his instructions. He has done a lot of work to make this very simple. Head over to http://code.headmelted.com/ and make it happen for yourself, now!

Jay says:

I've maintained the project for a few years now and it has expanded from providing binaries for Pi to providing support and tools to get VS Code running on low-end ARM devices that might not otherwise support it like Chromebooks (which make up about 60% of the devices in schools now).

The project has really taken off among educators (beyond what I would have thought), not least because they're restricted to the devices provided and it gives them a route to teach coding to students on these computers that might not otherwise be there.

Again, Jay is doing this out of love for the community and the work that makes it happen is hosted at https://github.com/headmelted/codebuilds. I'd encourage you to head over there right now and give him a STAR.

There's so many community members out there doing "thankless" work. Thank them. Thank them with a thank you email, a donation, or just your kindness when you file an issue and complain about all the free work they do for you.

I just picked up a Raspberry Pi 4 from Amazon, and I was able to get a community build of VS Code running on it easily!

Open a terminal, run "sudo -s" and then this script (again, the script is open source):

. <( wget -O - https://code.headmelted.com/installers/apt.sh )

Jay has done the work! That's just the apt instructions, but he's got Chrome OS, APT, YUM, and a manual option over at http://code.headmelted.com/!

Thank you for making this so much easier for us all.


Love Raspberry Pis? Here's some fun stuff you can do with the Raspberry that you bought, the one you meant to do fun stuff with, and the one in your junk drawer. DO IT!

Enjoy!


Sponsor: Couchbase gives developers the power of SQL with the flexibility of JSON. Start using it today for free with technologies including Kubernetes, Java, .NET, JavaScript, Go, and Python.



© 2019 Scott Hanselman. All rights reserved.
     

Preview of Active Directory authentication support on Azure Files


We are excited to announce the preview of Azure Files Active Directory (AD) authentication. You can now mount your Azure Files using AD credentials with the exact same access control experience as on-premises. You may leverage an Active Directory domain service either hosted on-premises or on Azure for authenticating user access to Azure Files for both premium and standard tiers. Managing file permissions is also simple. As long as your Active Directory identities are synced to Azure AD, you can continue to manage the share level permission through standard role-based access control (RBAC). For directory and file level permission, you simply configure Windows ACLs (NTFS DACLs) using Windows File Explorer just like any regular file share. Most of you may have already synced on-premises Active Directory to Azure AD as part of Office 365 or Azure adoption and are ready to take advantage of this new capability today.

When you consider migrating file servers to the cloud, many may decide to keep the existing Active Directory infrastructure and move the data first. With this preview release, we made it seamless for Azure Files to work with existing Active Directory with no change in the client environment. You can log into an Active Directory domain-joined machine and access an Azure file share with a single sign-on experience. In addition, you can carry over all existing NTFS DACLs that have been configured on the directories and files over the years and have them continue to be enforced as before. Simply migrate your files with ACLs using common tools like robust file copy (robocopy) or orchestrate tiering from on-premises Windows file servers to Azure Files with Azure File Sync.

With AD authentication, Azure Files can better serve as the storage solution for Virtual Desktop Infrastructure (VDI) user profiles. Most commonly, you have set up the VDI environment with Windows Virtual Desktop as an extension of your on-premises workspace while continuing to use Active Directory to manage the hosting environment. By using Azure Files as the user profile storage, when a user logs into the virtual session, only the profile of the authenticated user is loaded from Azure Files. You don’t need to set up a separate domain service for managing storage access control for your VDI environment. Azure Files provides you the most scalable, cost-efficient, and serverless file storage solution for hosting user profile data. To learn more about using Azure Files for Windows Virtual Desktop scenarios, refer to this article.

What’s new?

Below is a summary of the key capabilities introduced in the preview:

  • Enable Active Directory (Active Directory/Domain Services) authentication for server message block (SMB) access. You can mount Azure Files from Active Directory domain-joined machines either on-premises or on Azure using Active Directory credentials. Azure Files supports using Active Directory as the directory service for an identity-based access control experience for both premium and standard tiers. You can enable Active Directory authentication on self-managed or Azure File Sync-managed file shares.
  • Enforce share level and directory or file level permission. The existing access control experience continues to be enforced for file shares enabled for Active Directory authentication. You can leverage RBAC for share-level permission management, then persist or configure directory or file level NTFS DACLs using Windows File Explorer and icacls tools.
  • Support file migration from on-premises with ACL persistence over Azure File Sync. Azure File Sync now supports persisting ACLs on Azure Files in native NTFS DACL format. You can choose to use Azure File Sync for seamless migration from on-premises Windows file servers to Azure Files. Existing files and directories tiered to Azure Files through Azure File Sync have ACLs persisted in the native format.

Get started and share your experiences

You can create a file share in the preview supported regions and enable authentication with your Active Directory environment running on-premises or on Azure. Here are the documentation links with detailed guidance on the feature capabilities and step-by-step enablement.

As always, you can share your feedback and experience over email at azurefiles@microsoft.com. Post your ideas and suggestions about Azure Storage on our feedback forum.

A secure foundation for IoT, Azure Sphere now generally available


Today Microsoft Azure Sphere is generally available. Our mission is to empower every organization on the planet to connect and create secured and trustworthy IoT devices. General availability is an important milestone for our team and for our customers, demonstrating that we are ready to fulfill our promise at scale. For Azure Sphere, this marks a few specific points in our development. First, our software and hardware have completed rigorous quality and security reviews. Second, our security service is ready to support organizations of any size. And third, our operations and security processes are in place and ready for scale. General availability means that we are ready to put the full power of Microsoft behind securing every Azure Sphere device.

The opportunity to release a brand-new product that addresses crucial and unmet needs is rare. Azure Sphere is truly unique: our product brings a new technology category to the Microsoft family, to the IoT market, and to the security landscape.

IoT innovation requires security

The International Data Corporation (IDC) estimates that by 2025 there will be 41.6 billion connected IoT devices. Put in perspective, that’s more than five times the number of people on earth today. When we consider why IoT is growing so rapidly, the astounding pace is being driven by industries and companies that are investing in IoT to pursue long-term, real-world impact. They’re looking to harness the power of the intelligent edge to make daily life effortless, to transform businesses, to create safer working and living conditions, and to address some of the world’s most pressing challenges.

Innovation, no matter how valuable, is not durable without a foundation of security. If the devices and experiences that promise to reshape the world around us are not built on a foundation of security, they cannot last. But when innovation is built on a secure foundation, you can be confident in its ability to endure and deliver value long into the future. Durable innovation requires future-proofing IoT investments by planning and investing in security upfront.

IoT security is complex and the threat landscape is dynamic. You have to operate under the assumption that attacks will happen; it's not a matter of if but when. With this in mind, we built Azure Sphere with multiple layers of protection and with continually improving security, so that it’s possible to limit the reach of an attack and to renew and enhance the security of a device over time. Azure Sphere delivers foundational security for durable innovation.

Security is complex, but it doesn’t have to be complicated

Many of the customers we talk to struggle to define the specific IoT security measures necessary for success. We’ve leveraged our deep Microsoft experience in security to develop a very clear view of what IoT security requires. We found that there are seven properties that every IoT device must have in order to be secured. These properties clearly outline the requirements for an IoT device with multiple layers of protection and continually improving security.

Any organization can use the seven properties as a roadmap for device security, but Azure Sphere is designed to give our customers a fast track to secured IoT deployments by having all seven properties built-in. It makes achieving layered, renewable security for connected devices an easy, affordable, no-compromise decision.

Azure Sphere is a fully realized security system that protects devices over time. It includes four components, three of which are powered by technology: the Azure Sphere-certified chips that go into every device, the Azure Sphere operating system (OS) that runs on the chips, and the cloud-based Azure Sphere Security Service.

Every Azure Sphere chip includes built-in Microsoft security technology to provide a dependable hardware root of trust and advanced security measures to guard against attacks. The Azure Sphere OS is designed to limit the potential reach of an attack and to make it possible to restore the health of the device if it’s ever compromised. We continually update our OS, proactively adding new and emerging protections. The Azure Sphere Security Service reaches out and guards every Azure Sphere device. It brokers trust for device-to-cloud and device-to-device communication, monitors the Azure Sphere ecosystem to detect emerging threats, and provides a pipe for delivering application and OS updates to each device. Altogether, these layers of security prevent any single point of failure that could leave a device vulnerable.

The fourth component may be the most important: our Azure Sphere team. These are some of the brightest minds in security and they’re dedicated to the security of every single Azure Sphere device. Our team is at work identifying emerging security threats, creating new countermeasures, and deploying them to every Azure Sphere device. We are fighting the security battle, so our customers don’t have to.

Security obsessed, customer-driven

The challenges of IoT device security that keep us up at night lead to the features and capabilities that give our customers peace of mind. It’s ambitious and demanding work. To realize the defense-in-depth approach we had to integrate multiple distinct technologies and their related engineering disciplines. Our team can’t think about any component in isolation. Instead, we work from a unified view of interoperability and dependencies that brings together our silicon, operating system, SDK, security services, and developer experience. Having a clear mission gives us a shared focus to strategize and collaborate across teams and technologies. By eliminating boundaries among technologies or engineering teams, we’ve been able to create a product far greater than the sum of its parts.

We also made a point to collaborate with our early customers. We’ve used public preview to learn and improve how we deliver security in a way that supports customer and partner needs. Working closely with a wide range of customers has helped shape our investments in hardware, features, capabilities, and services. To support customers across the breadth of their IoT journeys, we’ve built strong partnerships with leading silicon and hardware manufacturers. This gives customers more choice, more implementation options, and new offerings that can speed time to market. Right now, customers are using Azure Sphere to securely connect everything from espresso machines to datacenters. Between those examples, there’s a whole range of use cases for home and commercial appliances, industrial manufacturing equipment, smart energy solutions, and so much more.

Our customers across a wide array of industries are putting their trust in Azure Sphere as they connect and secure equipment, drive improvements, reduce costs, and mitigate the real risks that cyberattacks present.

The Azure Sphere commitment

“Culture eats strategy for breakfast.” Only when we ground everything we do in our culture, can we support what’s necessary to execute a brilliant strategy. What we’ve set out to achieve with Azure Sphere is ambitious and Microsoft is deeply invested in a culture that can support the most ambitious ideas. We apply a growth mindset to everything we do and always strive to learn more about our customers. We actively seek diversity and practice inclusion as we work together toward the ultimate pursuit of making a difference in the world. Guided by our belief that a strong culture is an essential foundation for bringing our vision to life, we’ve focused on culture from the beginning.

Bringing together the right technology and tactics as a single, end-to-end solution at scale is an amazing amount of work that requires true teamwork. We’ve built a team with a broad variety of backgrounds, experience, and expertise across multiple disciplines to work together on Azure Sphere. To support collaboration and creativity, we have nurtured the Microsoft cultural values by practicing fearlessness, trustworthiness, and kindness. Without a strong and positive culture, the work we do would be much harder and far less fun. Our culture gives us the confidence to tackle seemingly impossible challenges and the freedom to take bold steps forward.

Azure Sphere general availability is a culmination of the focus, commitment, and investment we make as a team to realize our shared vision. I’m incredibly proud of the Azure Sphere team and what we’ve built together. And I’m grateful to share this accomplishment with all of the teammates, partners, and customers who have been a part of our journey to general availability. We’re ready to be our customers’ trusted partner in device security so that they can focus on unleashing innovation in their products and in their businesses.

If you are interested in learning more about how Azure Sphere can help you securely fast track your next IoT innovation:

Burst 4K encoding on Azure Kubernetes Service


Burst encoding in the cloud with Azure and Media Excel HERO platform.

Content creation has never been as in demand as it is today. Both professional and user-generated content have increased exponentially over the past years. This puts a lot of stress on media encoding and transcoding platforms. Add the upcoming 4K and even 8K to the mix, and you need a platform that can scale with these variables. Azure cloud compute offers a flexible way to grow with your needs. Microsoft offers various tools and products to fully support on-premises, hybrid, or native cloud workloads. Azure Stack supports hybrid scenarios for your computing needs, and Azure Arc helps you manage hybrid setups.

Finding a solution

Generally, 4K/UHD live encoding is done on dedicated hardware encoder units, which cannot be hosted in a public cloud like Azure. With such dedicated hardware units hosted on-premises and pushing 4K into an Azure datacenter, the immediate problem is the need for a high-bandwidth network connection between the on-premises encoder unit and the Azure datacenter. Since it is generally a best practice to ingest into multiple regions, the load on the network connection between the encoder and Azure increases further.

How do we ingest 4K content reliably into the public cloud?

Alternatively, we can encode the content in the cloud. If we can run 4K/UHD live encoding in Azure, its output can be ingested into Azure Media Services over the intra-Azure network backbone which provides sufficient bandwidth and reliability.

How can we reliably run and scale 4K/UHD live encoding on the Azure cloud as a containerized solution? Let's explore below. 

Azure Kubernetes Service

With Azure Kubernetes Service (AKS), Microsoft offers a managed Kubernetes platform to customers. It is a hosted Kubernetes service that spares you the configuration burden of creating a cluster: networking, cluster masters, and OS patching of the cluster nodes are handled for you. It also comes with pre-configured monitoring that integrates seamlessly with Azure Monitor and Log Analytics, while still offering the flexibility to integrate your own tools. Furthermore, it is plain vanilla Kubernetes, and as such is fully compatible with any existing tooling you might have running on any other standard Kubernetes platform.
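To make this concrete, here is a rough sketch of provisioning a comparable autoscaling AKS cluster with the Azure CLI. All resource names, sizes, and counts are hypothetical, not from this engagement, and the script only composes and prints the command so it can be reviewed before running:

```shell
# Hypothetical resource names and sizes; adjust for your workload.
RG="encode-rg"
CLUSTER="encode-aks"
REGION="westeurope"

# Compose the az aks create call with cluster autoscaling enabled.
cmd="az aks create --resource-group $RG --name $CLUSTER --location $REGION"
cmd="$cmd --node-count 3 --node-vm-size Standard_D8s_v3"
cmd="$cmd --enable-cluster-autoscaler --min-count 3 --max-count 10"

echo "$cmd"   # review, then run with: eval "$cmd"
```

Running the printed command requires an authenticated Azure CLI session (`az login`) and an existing resource group.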

Media Excel encoding

Media Excel is an encoding and transcoding vendor offering physical appliance and software-based encoding solutions. Media Excel has been partnering with Microsoft for many years and engaging in Azure media customer projects. It is also listed as a recommended and tested contribution encoder for Azure Media Services for fMP4. Work has also been done by Media Excel and Microsoft to integrate SCTE-35 timed metadata from the Media Excel encoder into an Azure Media Services origin, supporting Server-Side Ad Insertion (SSAI) workflows.

Networking challenge

With increasing picture quality like 4K and 8K, the burden on both compute and networking becomes a significant architectural challenge. In a recent engagement, we needed to architect a 4K live streaming platform with limited bandwidth capacity from the customer premises to one of our Azure datacenters. We worked with Media Excel to build a scalable containerized encoding platform on AKS, utilizing cloud compute and minimizing network latency between the encoder and the Azure Media Services packager. Multiple bitrates of the same source, with a top bitrate of up to 4Kp60 at 20 Mbps, are generated in the cloud and ingested into the Azure Media Services platform for further processing, including dynamic encryption and packaging. This setup enables the following benefits:

  • Instant scale to multiple AKS nodes
  • Eliminate network constraints between customer and Azure Datacenter
  • Automated workflow for containers and easy separation of concern with container technology
  • Increased level of security of high-quality generated content to distribution
  • Highly redundant capability
  • Flexibility to provide various types of Node pools for optimized media workloads

In this particular test, we proved that the intra-Azure network is extremely capable of shipping high-bandwidth, latency-sensitive 4K packets from a containerized encoder instance running in West Europe to both the East US and Hong Kong datacenter regions. This allows the customer to place an origin closer to them for further content conditioning.

High-level Architecture of used Azure components for 4K encoding in the Azure cloud.

Workflow:

  1. Azure Pipelines is triggered to deploy onto the AKS cluster. The YAML file (which you can find on GitHub) references the Media Excel container in Azure Container Registry.
  2. AKS starts the deployment and pulls the container from Azure Container Registry.
  3. During container start, a custom PHP script is loaded and the container is added to the HMS (Hero Management Service) and placed into the correct device pool and job.
  4. The encoder loads the source and (in this case) pushes a 4K livestream into Azure Media Services.
  5. Media Services packages the livestream into multiple formats and applies DRM (digital rights management).
  6. Azure Content Delivery Network scales the livestream.
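The deployment and verification portion of this workflow can be sketched as a short script. The registry name, manifest file, and pod label below are hypothetical, and the commands are composed and printed rather than executed:

```shell
# Hypothetical names: your ACR registry and the deployment manifest
# that references, e.g., $ACR.azurecr.io/mediaexcel/hero:latest.
ACR="encoderegistry"
DEPLOYMENT="deployment.yaml"

# Compose the deploy-and-watch steps as one reviewable block.
steps="az acr login --name $ACR
kubectl apply -f $DEPLOYMENT
kubectl get pods -l app=hero-encoder --watch"

echo "$steps"
```

With valid credentials and a real manifest, these three commands authenticate to the registry, roll out the encoder pods, and watch them come up.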

Scale through Azure Container Instances

With Azure Kubernetes Service you get the power of Azure Container Instances out of the box. Azure Container Instances are a way to instantly scale to pre-provisioned compute power at your disposal. When deploying Media Excel encoding instances to AKS, you can specify where these instances will be created. This offers you the flexibility to work with variables like increased density on cheaper nodes for low-cost, low-priority encoding jobs, or more expensive nodes for high-throughput, high-priority jobs. With Azure Container Instances you can instantly move workloads to standby compute power without provisioning time. You only pay for the compute time, offering full flexibility for customer demand and future changes in platform needs. With Media Excel’s flexible live/file-based encoding roles you can easily move workloads across the different compute power offered by AKS and Azure Container Instances.
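One way to put Azure Container Instances capacity behind an AKS cluster is the virtual-node add-on. As a hedged sketch (cluster and subnet names are hypothetical, and the command is printed for review rather than run):

```shell
# Hypothetical names; the subnet must exist in the cluster's VNet.
RG="encode-rg"
CLUSTER="encode-aks"

# Compose the add-on enablement command.
cmd="az aks enable-addons --resource-group $RG --name $CLUSTER"
cmd="$cmd --addons virtual-node --subnet-name aci-subnet"

echo "$cmd"
```

Once enabled, pods scheduled onto the virtual node run on Azure Container Instances with no node provisioning time.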

Container Creating on Azure Kubernetes Services (AKS)

Media Excel Hero Management System showing all Container Instances.

Azure DevOps pipeline to bring it all together

All the general benefits of containerized workloads apply in the following case. For this particular proof of concept, we created an automated deployment pipeline in Azure DevOps for easy testing and deployment. With a deployment YAML and a pipeline YAML we can easily automate the deployment, provisioning, and scaling of a Media Excel encoding container. Once DevOps pushes the deployment job onto AKS, a container image is pulled from Azure Container Registry. Although container images can be bulky, node-side caching of layers greatly speeds up any additional container pull, down to seconds. With the help of Media Excel, we created a YAML file containing pre- and post-container lifecycle logic that adds and removes a container from Media Excel's management portal. This offers easy single-pane-of-glass management of multiple instances across multiple node types, clusters, and regions.

This deployment pipeline offers full flexibility to provision specific multi-tenant customers or job priorities on specific node types. This unlocks the possibility of provisioning encoding jobs on GPU-enabled nodes for maximum throughput, or using cheaper generic nodes for low-priority jobs.

Deployment Release Pipeline in Azure DevOps.

Azure Media Services and Azure Content Delivery Network

Finally, we push the 4K stream into Azure Media Services. Azure Media Services is a cloud-based platform that enables you to build solutions that achieve broadcast-quality video streaming, enhance accessibility and distribution, analyze content, and much more. Whether you're an app developer, a call center, a government agency, or an entertainment company, Media Services helps you create apps that deliver media experiences of outstanding quality to large audiences on today’s most popular mobile devices and browsers.

Azure Media Services is seamlessly integrated with Azure Content Delivery Network. With Azure Content Delivery Network we offer a true multi CDN with choices of Azure Content Delivery Network from Microsoft, Azure Content Delivery Network from Verizon, and Azure Content Delivery Network from Akamai. All of this through a single Azure Content Delivery Network API for easy provisioning and management. As an added benefit, all CDN traffic between Azure Media Services Origin and CDN edge is free of charge.

With this setup, we’ve demonstrated that cloud encoding is ready to handle real-time 4K encoding across multiple clusters. Thanks to Azure services like AKS, Container Registry, Azure DevOps, Media Services, and Azure Content Delivery Network, it is easy to create an architecture capable of meeting high-throughput, time-sensitive constraints.

Fileless attack detection for Linux in preview


This blog post was co-authored by Aditya Joshi, Senior Software Engineer, Enterprise Protection and Detection.

Attackers are increasingly employing stealthier methods to avoid detection. Fileless attacks exploit software vulnerabilities, inject malicious payloads into benign system processes, and hide in memory. These techniques minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions.

To counter this threat, Azure Security Center released fileless attack detection for Windows in October 2018. Our blog post from 2018 explains how Security Center can detect shellcode, code injection, payload obfuscation techniques, and other fileless attack behaviors on Windows. Our research indicates the rise of fileless attacks on Linux workloads as well.

Today, Azure Security Center is happy to announce a preview for detecting fileless attacks on Linux.  In this post, we will describe a real-world fileless attack on Linux, introduce our fileless attack detection capabilities, and provide instructions for onboarding to the preview. 

Real-world fileless attack on Linux

One common pattern we see is attackers injecting payloads from packed malware on disk into memory and deleting the original malicious file from the disk. Here is a recent example:

  1. An attacker infects a Hadoop cluster by identifying the service running on a well-known port (8088) and uses Hadoop YARN unauthenticated remote command execution support to achieve runtime access on the machine. Note, the owner of the subscription could have mitigated this stage of the attack by configuring Security Center JIT.
  2. The attacker copies a file containing packed malware into a temp directory and launches it.
  3. The malicious process unpacks the file using shellcode to allocate a new dynamic executable region of memory in the process’s own memory space and injects an executable payload into the new memory region.
  4. The malware then transfers execution to the injected ELF entry point.
  5. The malicious process deletes the original packed malware from disk to cover its tracks. 
  6. The injected ELF payload contains a shellcode that listens for incoming TCP connections, transmitting the attacker’s instructions.
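A minimal illustration of why step 5 still leaves a trace in memory: on Linux, a running process whose executable was deleted from disk shows "(deleted)" in its /proc/&lt;pid&gt;/exe link. This is only one simple heuristic a scanner can use, not Security Center's actual detection logic:

```shell
# Scan /proc for processes whose on-disk binary has been deleted.
# Processes we lack permission to inspect are skipped silently.
found=0
for exe in /proc/[0-9]*/exe; do
  target=$(readlink "$exe" 2>/dev/null) || continue
  case "$target" in
    *'(deleted)'*)
      found=$((found + 1))
      echo "suspicious: ${exe%/exe} runs a deleted binary ($target)"
      ;;
  esac
done
echo "processes running deleted binaries: $found"
```

On a healthy system this usually reports zero; a nonzero count warrants a closer look at the listed processes.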

This attack is difficult for scanners to detect. The payload is hidden behind layers of obfuscation and only present on disk for a short time.  With the fileless attack detection preview, Security Center can now identify these kinds of payloads in memory and inform users of the payload’s capabilities.

Fileless attacks detection capabilities

Like fileless attack detection for Windows, this feature scans the memory of all processes for evidence of fileless toolkits, techniques and behaviors. Over the course of the preview, we will be enabling and refining our analytics to detect the following behaviors of userland malware:

  • Well known toolkits and crypto mining software. 
  • Shellcode, injected ELF executables, and malicious code in executable regions of process memory.
  • LD_PRELOAD based rootkits to preload malicious libraries.
  • Elevation of privilege of a process from non-root to root.
  • Remote control of another process using ptrace.
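As a small, hedged illustration of the LD_PRELOAD technique (not Security Center's implementation), the two usual places to look are the global preload file and a process's environment; here we inspect the current shell's own environment:

```shell
# 1. Global preload file: libraries listed here are injected into
#    every dynamically linked process on the system.
preload_file="/etc/ld.so.preload"
if [ -s "$preload_file" ]; then
  echo "global preload entries:"
  cat "$preload_file"
else
  echo "no global preload entries"
fi

# 2. Per-process environment: /proc/<pid>/environ is NUL-separated,
#    so translate NULs to newlines before searching.
if [ -r "/proc/$$/environ" ]; then
  env_hits=$(tr '\0' '\n' < "/proc/$$/environ" | grep -c '^LD_PRELOAD=')
else
  env_hits=0
fi
echo "LD_PRELOAD entries in this shell's environment: $env_hits"
```

A real detection would correlate such preloads with the libraries actually mapped into process memory, which is what memory scanning adds over this on-disk check.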

In the event of a detection, you receive an alert in the Security alerts page. Alerts contain supplemental information such as the kind of techniques used, process metadata, and network activity. This enables analysts to have a greater understanding of the nature of the malware, differentiate between different attacks, and make more informed decisions when choosing remediation steps.

 Picture1

The scan is non-invasive and does not affect the other processes on the system.  The vast majority of scans run in less than five seconds. The privacy of your data is protected throughout this procedure as all memory analysis is performed on the host itself. Scan results contain only security-relevant metadata and details of suspicious payloads.

Getting started

To sign-up for this specific preview, or our ongoing preview program, indicate your interest in the "Fileless attack detection preview."

Once you choose to onboard, this feature is automatically deployed to your Linux machines as an extension to the Log Analytics Agent for Linux (also known as the OMS Agent), which supports the Linux OS distributions described in this documentation. This solution supports Azure, cross-cloud, and on-premises environments. Participants must be enrolled in the Standard or Standard Trial pricing tier to benefit from this feature.

To learn more about Azure Security Center, visit the Azure Security Center page.


TraceProcessor 1.0.0


TraceProcessor version 1.0.0 is now available on NuGet with the following package ID:

Microsoft.Windows.EventTracing.Processing.All

This release contains bug fixes, API finalization and minor enhancements since version 0.3.0. Most of these changes were released recently in version 0.4.0. (A full changelog is below). Basic usage is still the same as in version 0.1.0 and version 0.2.0.

With version 1.0.0, we have stabilized the API, and following semantic versioning, no breaking changes (source or binary) will be made within the 1.x.y versions of these packages.

Note that there are a few parts of the API that are in preview and under active development; they may change in future releases; namely, the following types:

  • IEventConsumer
  • IScheduledConsumer
  • ICompletable
  • ConsumerSchedule
  • ExtendedDataItem
  • ExtendedDataItemReadOnlySpan
  • ICompletableTwoPassEventConsumer
  • IFilteredEventConsumer
  • IFilteredTwoPassEventConsumer
  • ITwoPassEventConsumer
  • TraceEventCallback
  • UnparsedGenericEvent

As before, if you find these packages useful, we would love to hear from you, and we welcome your feedback. For questions using this package, you can post on StackOverflow with the tag .net-traceprocessing, and issues can also be filed on the eventtracing-processing project on GitHub.

The full changelog for version 1.0.0 is as follows:

Breaking Changes (previously included in v0.4.0)

  • On IWindowsTracePreprocessorEvent, ProviderId has been renamed PreprocessorProviderId.
  • Throughout the API, duration properties are now of type TraceDuration or TimeSpan rather than Duration. TraceDuration has been used where the time represents the length of trace events, and TimeSpan has been used otherwise. (TraceDuration implicitly converts to Duration.)
  • UserData has been renamed to TraceEvent.Data and UserDataReader has been renamed to EventDataReader.
  • IThreadStack has been split into IThreadStack and IStackSnapshot. Stacks without thread and timestamp data are now just IThreadStack, and pattern matching and stringification are now supported for these stacks, including heap snapshot stacks. Full IStackSnapshot instances (which inherit from IThreadStack) work as IThreadStack did previously.
  • Struct properties that convert to another type now consistently omit any prefix. For example, the property is named TraceDuration.TimeSpan rather than TraceDuration.ToTimeSpan.
  • IStackFrame has been replaced with the StackFrame structure for better memory usage.
  • Numeric types are consistently represented as int or long for consistency with .NET.
  • The timestamp context extension methods Create(Timestamp or nanoseconds) and CreateTraceDuration(Timestamp or nanoseconds) have been replaced with CreateApproximate/CreateApproximateTraceDuration. Where possible, these methods create a TraceTimestamp or TraceDuration with an approximate .Value rather than having .IsPartial set to true.
  • StackId properties on IHeapAllocation are nullable to reflect missing data cases explicitly.

New Data Exposed (previously included in v0.4.0)

  • IImage now provides FileOffset.
  • An extension method on IImage, GetProcessAddress, supports turning a relative virtual address (RVA) into a process address that can be used to look up symbols.
  • StackFrame now provides a RelativeVirtualAddress property.
  • IWindowsTracePreprocessorEvent now has a PreprocessorProviderName property. This property requires a new version of the toolkit to function, which has not yet been released in a non-preview Windows SDK.
  • IGenericEventField now provides a DateTimeType.
  • IGenericEventField now supports a Type of TimeSpan (.AsTimeSpan and .AsTimeSpanList) for ETW timestamps.

Bug Fixes (previously included in v0.4.0)

  • All TraceTimestamps from the same trace can now be compared, even if one is Partial and the other is not.

Other (new in v1.0.0)

  • Console output and error produced during trace processing can be redirected via an extension method trace.Process(Stream, Stream).

The post TraceProcessor 1.0.0 appeared first on Windows Developer Blog.

Qt to support Visual Studio Linux projects


Qt is a popular cross-platform framework for application development and user interface design. Its various libraries and toolsets can be used to create, test, and deploy applications that target multiple platforms and operating systems including Linux, Windows, macOS and embedded/microcontroller systems. Qt recently announced its plan to support Visual Studio Linux projects in an upcoming release of the Qt Visual Studio Tools extension, scheduled for this summer.

The Qt Company Logo

“Since the introduction of the C++ Linux workload, users have had the possibility of working on Linux development in Visual Studio.  This feature is of potential interest to Qt developers, given the cross-platform nature of Qt itself, which is why we are now planning to add support for it in the Qt VS Tools extension.” – Miguel Costa @ The Qt Company

 

This work will build on Qt’s support for MSBuild-based Windows projects and will allow you to build and run Qt-enabled projects on both Windows and Linux. You can use Qt Visual Studio Tools with MSBuild-based Windows projects today.

We’re very excited about this work and the ability to leverage the power of Qt for Linux development in Visual Studio! You can read the full story on the Qt blog.

The post Qt to support Visual Studio Linux projects appeared first on C++ Team Blog.

More Spectre Mitigations in MSVC


In a previous blog post, Microsoft described the Spectre mitigations available under /Qspectre. These mitigations, while not significantly impacting performance, do not protect against all possible speculative load attacks. We are now adding two new switches, /Qspectre-load and /Qspectre-load-cf, to provide customers with a more complete mitigation of load-based Spectre attacks. These switches are only available on x86 and x64 platforms.

What do the new switches do?

The /Qspectre-load flag directs the compiler to generate serializing instructions for every load instruction. For most loads, this entails adding an LFENCE instruction after the load. However, this approach does not work for control flow instructions. In most cases, the instruction can be split into the load and the control flow transfer, so an LFENCE can be inserted in between. When this is not possible, such as for jmp [rax], the compiler uses an alternate mitigation strategy: loading the target non-destructively before inserting an LFENCE, as follows:

xor rbx, [rax]   ; load [rax] non-destructively (rbx ^= [rax])
xor rbx, [rax]   ; second xor restores rbx to its original value
lfence           ; serialize: no speculation past this point
jmp [rax]        ; the target load is now fenced

The /Qspectre-load-cf flag provides a subset of this behavior, only protecting control flow instructions: JMP, RET, and CALL.

If there are performance critical blocks of code that do not require protection, then you can disable these mitigations using __declspec(spectre(nomitigation)). As these switches stop speculation of all loads, the performance impact is very high, so this mitigation is not appropriate everywhere.
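For reference, the switches are plain cl.exe flags. This sketch just composes and prints the two command lines (run them on Windows with an MSVC toolset from Visual Studio 16.5 Preview 3 or later); the source file name is a placeholder:

```shell
# Common compile options; source.cpp is a placeholder file name.
common="/c /O2"

# Fence every load vs. only loads feeding JMP/RET/CALL.
full_cmd="cl $common /Qspectre-load source.cpp"
cf_cmd="cl $common /Qspectre-load-cf source.cpp"

printf '%s\n' "$full_cmd" "$cf_cmd"
```

Because /Qspectre-load fences every load, expect a much larger performance impact than /Qspectre-load-cf; profile both before choosing.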

What versions of MSVC support the /Qspectre-load and /Qspectre-load-cf switches?

These switches are available starting in Visual Studio 2019 version 16.5 Preview 3 and will be included in the MSVC toolsets of all future Visual Studio releases.

How do I enable this?

Starting from Visual Studio 2019 version 16.5 Preview 3, developers can use these new Spectre mitigation options. To enable either new flag, select the flag you want from “Spectre Mitigation” under the “Code Generation” section of the project Property Pages:

Screencap of the Spectre Mitigation option in the project properties.

Your feedback is key to delivering the best experience. If you have any questions, please feel free to ask us below. You can also send us your comments through e-mail. If you encounter problems with the experience or have suggestions for improvement, please Report A Problem or reach out via Developer Community. You can also find us on Twitter @VisualC.

The post More Spectre Mitigations in MSVC appeared first on C++ Team Blog.

Custom AI-Assisted IntelliSense for your team


As you’ve been editing code, you may have noticed IntelliCode’s starred recommendations in your autocompletion lists. Our previous IntelliCode blog post explains that these smarter suggestions were machine-learned over thousands of open-source GitHub repos. Using community knowledge is great for public APIs like the Standard Library, but what if you want IntelliCode suggestions for your APIs and other libraries that wouldn’t commonly be found in open-source code? To address this, in Visual Studio 2019 version 16.5 Preview 3 you can now train custom IntelliCode models on your own codebases. This generates something we call a “Team Completions” model, because you’ll start to get suggestions based on your team’s coding patterns.

Team Completion model training is a Preview Feature. We look forward to your feedback as we continue to iterate. Currently, the training results may vary depending on the complexity of your configuration and platform settings. 

How do I create and use my own model? 

First, ensure that “C++ team models for completions” is Enabled under Tools > Options > IntelliCode > General > Preview Features. The simplest way to train and test out a model is via View > Other Windows > Train IntelliCode Model for this Repository. This will instantly start training a model on your codebase. After training, your first member list invocation will load your new Team Completions model, and subsequent invocations will begin to use the new model.

Image training 

Anyone who has access to the repo will automatically get the model when they open the repo. This way your whole team can benefit without everyone needing to individually train a model. 

  Image customCompletion

Note that we don’t upload your raw source code to our servers. You can learn more about what happens when you train a model in our FAQ.

Manually retraining your model 

You shouldn’t need to retrain your model often. You’ll benefit from retraining if you’ve made significant code changes that you’d like to be reflected in IntelliCode’s recommendations. In the case that you do want to retrain, you can go through the same manual process from the section above. 

Automatically creating and retraining a model via Azure Pipelines 

If you don’t want to have to think about retraining, you can automatically create and retrain a model as part of your continuous integration pipeline in Azure Pipelines. You’ll need to install the Visual Studio IntelliCode Team Model Training task from Visual Studio Marketplace to your Azure DevOps organization or Azure DevOps Server. This way, when code changes are pushed to your repo, the build task runs and your team completions model is retrained. For more detailed instructions, please follow this document on configuring and automating the build task. 

Give us your feedback 

Download Visual Studio 2019 version 16.5 Preview 3 today and give it a try. We’d love your input as we continue to improve Team Completions for C++. We can be reached via the comments below, email (visualcpp@microsoft.com), and Twitter (@VisualC). The best way to file a bug or suggest a feature is via Developer Community. 

 

The post Custom AI-Assisted IntelliSense for your team appeared first on C++ Team Blog.

Reminder: Visual Studio for Mac: Refresh(); event on Feb 24


The Visual Studio for Mac Refresh(); event is just a few days away, starting on Monday, February 24, at 9 AM. We’ve got a great day of content planned. Make sure to “save the date” so you don’t forget over the weekend!

Don’t miss the Code Party!

You’ll want to stay through the end of the day to catch the virtual attendee party. Jeff Fritz and I will be co-hosting this, with lots of trivia questions and prizes from Mobilize.Net, Uno Platform, DevExpress, LEADTOOLS, Octopus Deploy, Progress, PreEmptive, AIMHI, Gnostice, and Syncfusion.

In addition to some great prizes from our vendor sponsors, we’re including some brand new, special edition Visual Studio for Mac swag, including stickers, t-shirts, and coffee cups with some top Visual Studio for Mac keyboard shortcuts!

Visual Studio for Mac - Keyboard Shortcuts mug

Visual Studio for Mac: Refresh(); Coffee Mug

 

Visual Studio for Mac: Refresh(); shirts

A Full Day of .NET for Mac Devs

This is an information packed day from start to finish. We’ve got a great keynote featuring Amanda Silver and Scott Hunter and some great demos, followed by deep dive sessions from an amazing speaker team, taking your questions live on Twitter and Twitch. Don’t forget to use the #VSforMacRefresh hashtag when reaching out to us.

Time | Session | Speakers
  • 9:00 AM | Keynote: A Fresh Look at Visual Studio for Mac | Amanda Silver, Scott Hunter, Jon Galloway
  • 9:50 AM | Building Blazor applications on a Mac | Dan Roth, Kendra Havens
  • 10:30 AM | Realtime web applications, from your Mac to the Cloud, with SignalR and Azure | Brady Gaster
  • 11:10 AM | Working with ASP.NET Core on macOS | Sayed Ibrahim Hashimi, Jon Galloway
  • 11:50 AM | Serverless apps with .NET Core and macOS | Jeff Hollan
  • 12:30 PM | What’s new for Unity developers on macOS | Abdullah Hamed, Sarah Sexton
  • 1:10 PM | Building mobile applications with .NET Xamarin | Maddy Leger, James Montemagno
  • 1:50 PM | How to be productive developing with .NET on a Mac | Mikayla Hutchinson, Kendra Havens
  • 2:30 PM | Closing and Virtual Attendee Party | Jeff Fritz, Jon Galloway

 

Keynote: A Fresh Look at Visual Studio for Mac

Amanda Silver, Scott Hunter, Jon Galloway

Whether you’re completely new to Visual Studio for Mac, haven’t used it in a while, or use it daily and want to learn more, now’s a great time for a close look. Over the last few releases, we’ve added features and improved the existing experience to make Visual Studio for Mac the premier environment for building .NET applications on the Mac. Join us for this keynote presentation to learn how to make web, mobile, and game development easy and productive on your Mac.

Building Blazor Applications on a Mac

Daniel Roth, Kendra Havens

Blazor lets you build interactive web UIs using C# instead of JavaScript, with rich web UI components implemented using C#, HTML, and CSS. Both client and server code are written in C#, allowing you to share code and libraries. Visual Studio 2019 for Mac includes support for building Blazor Server applications, and in this session, Daniel Roth and Kendra Havens are going to show you how to get started!

Realtime Web Applications, From Your Mac to the Cloud, with SignalR + Azure

Brady Gaster

Want to build interactive applications that deliver up-to-date information without making your users constantly hit a refresh button? It’s hard, especially if you want to gracefully degrade for older clients. Fortunately, SignalR helps you build that functionality into your server-side code quickly, with great standards support and fallback! Combine that with Azure SignalR service, and you can quickly build real-time applications at scale, all from your Mac.

ASP.NET Core on macOS

Sayed Ibrahim Hashimi, Jon Galloway

ASP.NET Core is a web framework for building web apps and services… and did we mention that they’re crazy fast? We’ll show you how Visual Studio for Mac is packed with features to help you get going quickly and stay productive, with project templates, scaffolding, debugging, deployment, and more.

Serverless on a Mac

Jeff Hollan

Serverless applications let you develop efficient, cost effective applications without worrying about application infrastructure. With Visual Studio for Mac, you can take advantage of the benefits of serverless from a familiar and productive development experience.  In this session we’ll go over how to get started developing, debugging, and testing your serverless apps on a Mac, then show you how to deploy your app to the cloud at scale in a few commands.

Building Mobile Applications on .NET with Xamarin

James Montemagno, Maddy Leger

What if you could build applications for iOS, Android, and Windows with the productivity of .NET, while still providing great native experiences? Xamarin to the rescue! James and Maddy will show you how, and show off some great new features for Xamarin development on macOS.

Building Games with Unity

Abdullah Hamed, Sarah Sexton

Unity is a game development platform for building 2D and 3D games using .NET that run on 25+ platforms across mobile, desktop, console, TV, VR, AR, and the Web. Come join the fun as Abdullah shows you how to build games on your Mac.

How to be Productive Developing .NET on a Mac

Mikayla Hutchinson, Kendra Havens

Now that you’ve seen all the different types of applications you can build using Visual Studio for Mac, we’ll show you some great tips that will make your development experience more productive and fun! You’ll learn about tons of features you didn’t know existed, optional tweaks, shortcuts, and more!

Closing and Virtual Attendee Party

Jeff Fritz, Jon Galloway

Join our live stream attendee party for live chat, Q&A, trivia, giveaways, and some fun surprises! We’ll be streaming live from Channel 9 Studio, taking your questions on Twitch and Twitter, talking about what we saw during the day, and where you can go next to get involved and learn more.

We look forward to seeing you on February 24!

The post Reminder: Visual Studio for Mac: Refresh(); event on Feb 24 appeared first on Visual Studio Blog.
