
Windows 10 SDK Preview Build 18361 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18361 or greater). The Preview SDK Build 18361 contains bug fixes and changes to the API surface area that are still under development.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit apps that target Windows 10 build 1809 or earlier to the Microsoft Store.
  • The Windows SDK is now formally supported only by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
  • This build of the Windows SDK will install on Windows 10 Insider Preview builds and supported Windows operating systems.
  • To assist with script access to the SDK, the ISO can also be accessed through the following URL once the static URL is published: https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=18361
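For scripted downloads, a small helper can construct that static URL. Note the assumption here: the o1 query parameter appears to carry the SDK build number (it is 18361 in the URL above), while the other parameters are treated as fixed; this is an inferred pattern, not a documented contract.

```python
from urllib.parse import urlencode

def sdk_iso_url(build: int) -> str:
    """Build the static ISO download URL for a given SDK preview build.

    Assumes the o1 query parameter carries the build number, as it does
    for build 18361; the remaining parameters are kept fixed.
    """
    params = {
        "prd": 11966,
        "pver": "1.0",
        "plcid": "0x409",
        "clcid": "0x409",
        "ar": "Flight",
        "sar": "Sdsurl",
        "o1": build,
    }
    return "https://go.microsoft.com/fwlink/?" + urlencode(params)

print(sdk_iso_url(18361))
```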

Tools Updates

Message Compiler (mc.exe)

  • The “-mof” switch (to generate XP-compatible ETW helpers) is considered to be deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated ETW helpers to expect Vista or later.
  • The “-A” switch (to generate .BIN files using ANSI encoding instead of Unicode) is considered to be deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated .BIN files to use Unicode string encoding.
  • The behavior of the “-A” switch has changed. Prior to the Windows 1607 Anniversary Update SDK, when using the -A switch, BIN files were encoded using the build system’s ANSI code page. In the Windows 1607 Anniversary Update SDK, mc.exe’s behavior was inadvertently changed to encode BIN files using the build system’s OEM code page. In the 19H1 SDK, mc.exe’s previous behavior has been restored and it now encodes BIN files using the build system’s ANSI code page. Note that the -A switch is deprecated, as ANSI-encoded BIN files do not provide a consistent user experience in multilingual systems.
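The ANSI vs. OEM distinction above is easy to see with any code-page-aware encoder. As a rough sketch (assuming a US-English build system, where the ANSI code page is Windows-1252 and the OEM code page is CP437), the same string yields different bytes under each:

```python
# The same text produces different bytes under the build system's ANSI
# code page (Windows-1252 on a US system) vs. its OEM code page (CP437),
# which is why the mc.exe -A code-page regression mattered.
text = "résumé"

ansi_bytes = text.encode("cp1252")  # ANSI: é -> 0xE9
oem_bytes = text.encode("cp437")    # OEM:  é -> 0x82

print(ansi_bytes.hex())  # 72e973756de9
print(oem_bytes.hex())   # 728273756d82
```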

Breaking Changes

IAppxPackageReader2 has been removed from appxpackaging.h

The interface IAppxPackageReader2 was removed from appxpackaging.h. Remove any use of IAppxPackageReader2, or use IAppxPackageReader instead.

Change to effect graph of the AcrylicBrush

In this Preview SDK, we’ll be adding a blend mode called Luminosity to the effect graph of the AcrylicBrush. This blend mode will ensure that shadows do not appear behind acrylic surfaces without a cutout. We will also be exposing a LuminosityBlendOpacity API that allows for more AcrylicBrush customization.

By default, for apps that have not specified a LuminosityBlendOpacity on their AcrylicBrushes, we have implemented logic to ensure that the acrylic will look as similar as it can to current 1809 acrylics. Please note that we will be updating our default brushes to account for this recipe change.
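For context on what a luminosity blend does: the standard "luminosity" blend mode (as defined in the W3C compositing-and-blending spec) keeps the backdrop's hue and saturation while taking the source's luminosity. Whether AcrylicBrush's new blend uses exactly this formula is an assumption; the sketch below only illustrates the general technique.

```python
# Standard luminosity blend per the W3C compositing spec (a sketch;
# AcrylicBrush's internal recipe may differ). Colors are (r, g, b)
# tuples with components in [0, 1].

def lum(c):
    r, g, b = c
    return 0.3 * r + 0.59 * g + 0.11 * b

def clip_color(c):
    """Clamp out-of-gamut components while preserving luminosity."""
    l = lum(c)
    n, x = min(c), max(c)
    if n < 0:
        c = [l + (ci - l) * l / (l - n) for ci in c]
    if x > 1:
        c = [l + (ci - l) * (1 - l) / (x - l) for ci in c]
    return tuple(c)

def set_lum(c, l):
    """Shift a color so its luminosity equals l."""
    d = l - lum(c)
    return clip_color(tuple(ci + d for ci in c))

def blend_luminosity(backdrop, source):
    # Keep the backdrop's hue/saturation, take the source's luminosity.
    return set_lum(backdrop, lum(source))
```

For example, blending a blue source over a red backdrop yields a color with the backdrop's hue but the source's (darker) luminosity.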

TraceLoggingProvider.h / TraceLoggingWrite

Events generated by TraceLoggingProvider.h (e.g. via TraceLoggingWrite macros) will now always have Id and Version set to 0.

Previously, TraceLoggingProvider.h would assign IDs to events at link time. These IDs were unique within a DLL or EXE, but changed from build to build and from module to module.

API Updates, Additions and Removals

Additions:


namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSession : IClosable {
    public LearningModelSession(LearningModel model, LearningModelDevice deviceToRunOn, LearningModelSessionOptions learningModelSessionOptions);
  }
  public sealed class LearningModelSessionOptions
  public sealed class TensorBoolean : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorBoolean CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorBoolean CreateFromShapeArrayAndDataArray(long[] shape, bool[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorDouble : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorDouble CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorDouble CreateFromShapeArrayAndDataArray(long[] shape, double[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorFloat : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorFloat CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorFloat CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorFloat16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorFloat16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorFloat16Bit CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, short[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, int[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, long[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorString : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorString CreateFromShapeArrayAndDataArray(long[] shape, string[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, ushort[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, uint[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, ulong[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
    IMemoryBufferReference CreateReference();
  }
}
namespace Windows.ApplicationModel {
  public sealed class Package {
    StorageFolder EffectiveLocation { get; }
    StorageFolder MutableLocation { get; }
  }
}
namespace Windows.ApplicationModel.AppService {
  public sealed class AppServiceConnection : IClosable {
    public static IAsyncOperation<StatelessAppServiceResponse> SendStatelessMessageAsync(AppServiceConnection connection, RemoteSystemConnectionRequest connectionRequest, ValueSet message);
  }
  public sealed class AppServiceTriggerDetails {
    string CallerRemoteConnectionToken { get; }
  }
  public sealed class StatelessAppServiceResponse
  public enum StatelessAppServiceResponseStatus
}
namespace Windows.ApplicationModel.Background {
  public sealed class ConversationalAgentTrigger : IBackgroundTrigger
}
namespace Windows.ApplicationModel.Calls {
  public sealed class PhoneLine {
    string TransportDeviceId { get; }
    void EnableTextReply(bool value);
  }
  public enum PhoneLineTransport {
    Bluetooth = 2,
  }
  public sealed class PhoneLineTransportDevice
}
namespace Windows.ApplicationModel.Calls.Background {
  public enum PhoneIncomingCallDismissedReason
  public sealed class PhoneIncomingCallDismissedTriggerDetails
  public enum PhoneTriggerType {
    IncomingCallDismissed = 6,
  }
}
namespace Windows.ApplicationModel.Calls.Provider {
  public static class PhoneCallOriginManager {
    public static bool IsSupported { get; }
  }
}
namespace Windows.ApplicationModel.ConversationalAgent {
  public sealed class ConversationalAgentSession : IClosable
  public sealed class ConversationalAgentSessionInterruptedEventArgs
  public enum ConversationalAgentSessionUpdateResponse
  public sealed class ConversationalAgentSignal
  public sealed class ConversationalAgentSignalDetectedEventArgs
  public enum ConversationalAgentState
  public sealed class ConversationalAgentSystemStateChangedEventArgs
  public enum ConversationalAgentSystemStateChangeType
}
namespace Windows.ApplicationModel.Preview.Holographic {
  public sealed class HolographicKeyboardPlacementOverridePreview
}
namespace Windows.ApplicationModel.Resources {
  public sealed class ResourceLoader {
    public static ResourceLoader GetForUIContext(UIContext context);
  }
}
namespace Windows.ApplicationModel.Resources.Core {
  public sealed class ResourceCandidate {
    ResourceCandidateKind Kind { get; }
  }
  public enum ResourceCandidateKind
  public sealed class ResourceContext {
    public static ResourceContext GetForUIContext(UIContext context);
  }
}
namespace Windows.ApplicationModel.UserActivities {
  public sealed class UserActivityChannel {
    public static UserActivityChannel GetForUser(User user);
  }
}
namespace Windows.Devices.Bluetooth.GenericAttributeProfile {
  public enum GattServiceProviderAdvertisementStatus {
    StartedWithoutAllAdvertisementData = 4,
  }
  public sealed class GattServiceProviderAdvertisingParameters {
    IBuffer ServiceData { get; set; }
  }
}
namespace Windows.Devices.Enumeration {
  public enum DevicePairingKinds : uint {
    ProvidePasswordCredential = (uint)16,
  }
  public sealed class DevicePairingRequestedEventArgs {
    void AcceptWithPasswordCredential(PasswordCredential passwordCredential);
  }
}
namespace Windows.Devices.Input {
  public sealed class PenDevice
}
namespace Windows.Devices.PointOfService {
  public sealed class JournalPrinterCapabilities : ICommonPosPrintStationCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class JournalPrintJob : IPosPrinterJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
  }
  public sealed class PosPrinter : IClosable {
    IVectorView<uint> SupportedBarcodeSymbologies { get; }
    PosPrinterFontProperty GetFontProperty(string typeface);
  }
  public sealed class PosPrinterFontProperty
  public sealed class PosPrinterPrintOptions
  public sealed class ReceiptPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class ReceiptPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
    void StampPaper();
  }
  public struct SizeUInt32
  public sealed class SlipPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class SlipPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
  }
}
namespace Windows.Globalization {
  public sealed class CurrencyAmount
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPrimitiveTopology
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicCamera {
    HolographicViewConfiguration ViewConfiguration { get; }
  }
  public sealed class HolographicDisplay {
    HolographicViewConfiguration TryGetViewConfiguration(HolographicViewConfigurationKind kind);
  }
  public sealed class HolographicViewConfiguration
  public enum HolographicViewConfigurationKind
}
namespace Windows.Management.Deployment {
  public enum AddPackageByAppInstallerOptions : uint {
    LimitToExistingPackages = (uint)512,
  }
  public enum DeploymentOptions : uint {
    RetainFilesOnFailure = (uint)2097152,
  }
}
namespace Windows.Media.Devices {
  public sealed class InfraredTorchControl
  public enum InfraredTorchMode
  public sealed class VideoDeviceController : IMediaDeviceController {
    InfraredTorchControl InfraredTorchControl { get; }
  }
}
namespace Windows.Media.Miracast {
  public sealed class MiracastReceiver
  public sealed class MiracastReceiverApplySettingsResult
  public enum MiracastReceiverApplySettingsStatus
  public enum MiracastReceiverAuthorizationMethod
  public sealed class MiracastReceiverConnection : IClosable
  public sealed class MiracastReceiverConnectionCreatedEventArgs
  public sealed class MiracastReceiverCursorImageChannel
  public sealed class MiracastReceiverCursorImageChannelSettings
  public sealed class MiracastReceiverDisconnectedEventArgs
  public enum MiracastReceiverDisconnectReason
  public sealed class MiracastReceiverGameControllerDevice
  public enum MiracastReceiverGameControllerDeviceUsageMode
  public sealed class MiracastReceiverInputDevices
  public sealed class MiracastReceiverKeyboardDevice
  public enum MiracastReceiverListeningStatus
  public sealed class MiracastReceiverMediaSourceCreatedEventArgs
  public sealed class MiracastReceiverSession : IClosable
  public sealed class MiracastReceiverSessionStartResult
  public enum MiracastReceiverSessionStartStatus
  public sealed class MiracastReceiverSettings
  public sealed class MiracastReceiverStatus
  public sealed class MiracastReceiverStreamControl
  public sealed class MiracastReceiverVideoStreamSettings
  public enum MiracastReceiverWiFiStatus
  public sealed class MiracastTransmitter
  public enum MiracastTransmitterAuthorizationStatus
}
namespace Windows.Networking.Connectivity {
  public enum NetworkAuthenticationType {
    Wpa3 = 10,
    Wpa3Sae = 11,
  }
}
namespace Windows.Networking.NetworkOperators {
  public sealed class ESim {
    ESimDiscoverResult Discover();
    ESimDiscoverResult Discover(string serverAddress, string matchingId);
    IAsyncOperation<ESimDiscoverResult> DiscoverAsync();
    IAsyncOperation<ESimDiscoverResult> DiscoverAsync(string serverAddress, string matchingId);
  }
  public sealed class ESimDiscoverEvent
  public sealed class ESimDiscoverResult
  public enum ESimDiscoverResultKind
}
namespace Windows.Perception.People {
  public sealed class EyesPose
  public enum HandJointKind
  public sealed class HandMeshObserver
  public struct HandMeshVertex
  public sealed class HandMeshVertexState
  public sealed class HandPose
  public struct JointPose
  public enum JointPoseAccuracy
}
namespace Windows.Perception.Spatial {
  public struct SpatialRay
}
namespace Windows.Perception.Spatial.Preview {
  public sealed class SpatialGraphInteropFrameOfReferencePreview
  public static class SpatialGraphInteropPreview {
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem);
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition);
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition, Quaternion relativeOrientation);
  }
}
namespace Windows.Security.Authorization.AppCapabilityAccess {
  public sealed class AppCapability
  public sealed class AppCapabilityAccessChangedEventArgs
  public enum AppCapabilityAccessStatus
}
namespace Windows.Security.DataProtection {
  public enum UserDataAvailability
  public sealed class UserDataAvailabilityStateChangedEventArgs
  public sealed class UserDataBufferUnprotectResult
  public enum UserDataBufferUnprotectStatus
  public sealed class UserDataProtectionManager
  public sealed class UserDataStorageItemProtectionInfo
  public enum UserDataStorageItemProtectionStatus
}
namespace Windows.Storage.AccessCache {
  public static class StorageApplicationPermissions {
    public static StorageItemAccessList GetFutureAccessListForUser(User user);
    public static StorageItemMostRecentlyUsedList GetMostRecentlyUsedListForUser(User user);
  }
}
namespace Windows.Storage.Pickers {
  public sealed class FileOpenPicker {
    User User { get; }
    public static FileOpenPicker CreateForUser(User user);
  }
  public sealed class FileSavePicker {
    User User { get; }
    public static FileSavePicker CreateForUser(User user);
  }
  public sealed class FolderPicker {
    User User { get; }
    public static FolderPicker CreateForUser(User user);
  }
}
namespace Windows.System {
  public sealed class DispatcherQueue {
    bool HasThreadAccess { get; }
  }
  public enum ProcessorArchitecture {
    Arm64 = 12,
    X86OnArm64 = 14,
  }
}
namespace Windows.System.Profile {
  public static class AppApplicability
  public sealed class UnsupportedAppRequirement
  public enum UnsupportedAppRequirementReasons : uint
}
namespace Windows.System.RemoteSystems {
  public sealed class RemoteSystem {
    User User { get; }
    public static RemoteSystemWatcher CreateWatcherForUser(User user);
    public static RemoteSystemWatcher CreateWatcherForUser(User user, IIterable<IRemoteSystemFilter> filters);
  }
  public sealed class RemoteSystemApp {
    string ConnectionToken { get; }
    User User { get; }
  }
  public sealed class RemoteSystemConnectionRequest {
    string ConnectionToken { get; }
    public static RemoteSystemConnectionRequest CreateFromConnectionToken(string connectionToken);
    public static RemoteSystemConnectionRequest CreateFromConnectionTokenForUser(User user, string connectionToken);
  }
  public sealed class RemoteSystemWatcher {
    User User { get; }
  }
}
namespace Windows.UI {
  public sealed class UIContentRoot
  public sealed class UIContext
}
namespace Windows.UI.Composition {
  public enum CompositionBitmapInterpolationMode {
    MagLinearMinLinearMipLinear = 2,
    MagLinearMinLinearMipNearest = 3,
    MagLinearMinNearestMipLinear = 4,
    MagLinearMinNearestMipNearest = 5,
    MagNearestMinLinearMipLinear = 6,
    MagNearestMinLinearMipNearest = 7,
    MagNearestMinNearestMipLinear = 8,
    MagNearestMinNearestMipNearest = 9,
  }
  public sealed class CompositionGraphicsDevice : CompositionObject {
    CompositionMipmapSurface CreateMipmapSurface(SizeInt32 sizePixels, DirectXPixelFormat pixelFormat, DirectXAlphaMode alphaMode);
    void Trim();
  }
  public sealed class CompositionMipmapSurface : CompositionObject, ICompositionSurface
  public sealed class CompositionProjectedShadow : CompositionObject
  public sealed class CompositionProjectedShadowCaster : CompositionObject
  public sealed class CompositionProjectedShadowCasterCollection : CompositionObject, IIterable<CompositionProjectedShadowCaster>
  public sealed class CompositionProjectedShadowReceiver : CompositionObject
  public sealed class CompositionProjectedShadowReceiverUnorderedCollection : CompositionObject, IIterable<CompositionProjectedShadowReceiver>
  public sealed class CompositionRadialGradientBrush : CompositionGradientBrush
  public sealed class CompositionSurfaceBrush : CompositionBrush {
    bool SnapToPixels { get; set; }
  }
  public class CompositionTransform : CompositionObject
  public sealed class CompositionVisualSurface : CompositionObject, ICompositionSurface
  public sealed class Compositor : IClosable {
    CompositionProjectedShadow CreateProjectedShadow();
    CompositionProjectedShadowCaster CreateProjectedShadowCaster();
    CompositionProjectedShadowReceiver CreateProjectedShadowReceiver();
    CompositionRadialGradientBrush CreateRadialGradientBrush();
    CompositionVisualSurface CreateVisualSurface();
  }
  public interface IVisualElement
}
namespace Windows.UI.Composition.Interactions {
  public enum InteractionBindingAxisModes : uint
  public sealed class InteractionTracker : CompositionObject {
    public static InteractionBindingAxisModes GetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2);
    public static void SetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2, InteractionBindingAxisModes axisMode);
  }
  public sealed class InteractionTrackerCustomAnimationStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerIdleStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerInertiaStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerInteractingStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource {
    public static VisualInteractionSource CreateFromIVisualElement(IVisualElement source);
  }
}
namespace Windows.UI.Composition.Scenes {
  public enum SceneAlphaMode
  public enum SceneAttributeSemantic
  public sealed class SceneBoundingBox : SceneObject
  public class SceneComponent : SceneObject
  public sealed class SceneComponentCollection : SceneObject, IIterable<SceneComponent>, IVector<SceneComponent>
  public enum SceneComponentType
  public class SceneMaterial : SceneObject
  public class SceneMaterialInput : SceneObject
  public sealed class SceneMesh : SceneObject
  public sealed class SceneMeshMaterialAttributeMap : SceneObject, IIterable<IKeyValuePair<string, SceneAttributeSemantic>>, IMap<string, SceneAttributeSemantic>
  public sealed class SceneMeshRendererComponent : SceneRendererComponent
  public sealed class SceneMetallicRoughnessMaterial : ScenePbrMaterial
  public sealed class SceneModelTransform : CompositionTransform
  public sealed class SceneNode : SceneObject
  public sealed class SceneNodeCollection : SceneObject, IIterable<SceneNode>, IVector<SceneNode>
  public class SceneObject : CompositionObject
  public class ScenePbrMaterial : SceneMaterial
  public class SceneRendererComponent : SceneComponent
  public sealed class SceneSurfaceMaterialInput : SceneMaterialInput
  public sealed class SceneVisual : ContainerVisual
  public enum SceneWrappingMode
}
namespace Windows.UI.Core {
  public sealed class CoreWindow : ICorePointerRedirector, ICoreWindow {
    UIContext UIContext { get; }
  }
}
namespace Windows.UI.Core.Preview {
  public sealed class CoreAppWindowPreview
}
namespace Windows.UI.Input {
  public class AttachableInputObject : IClosable
  public enum GazeInputAccessStatus
  public sealed class InputActivationListener : AttachableInputObject
  public sealed class InputActivationListenerActivationChangedEventArgs
  public enum InputActivationState
}
namespace Windows.UI.Input.Preview {
  public static class InputActivationListenerPreview
}
namespace Windows.UI.Input.Spatial {
  public sealed class SpatialInteractionManager {
    public static bool IsSourceKindSupported(SpatialInteractionSourceKind kind);
  }
  public sealed class SpatialInteractionSource {
    HandMeshObserver TryCreateHandMeshObserver();
    IAsyncOperation<HandMeshObserver> TryCreateHandMeshObserverAsync();
  }
  public sealed class SpatialInteractionSourceState {
    HandPose TryGetHandPose();
  }
  public sealed class SpatialPointerPose {
    EyesPose Eyes { get; }
    bool IsHeadCapturedBySystem { get; }
  }
}
namespace Windows.UI.Notifications {
  public sealed class ToastActivatedEventArgs {
    ValueSet UserInput { get; }
  }
  public sealed class ToastNotification {
    bool ExpiresOnReboot { get; set; }
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    string PersistedStateId { get; set; }
    UIContext UIContext { get; }
    WindowingEnvironment WindowingEnvironment { get; }
    public static void ClearAllPersistedState();
    public static void ClearPersistedState(string key);
    IVectorView<DisplayRegion> GetDisplayRegions();
  }
  public sealed class InputPane {
    public static InputPane GetForUIContext(UIContext context);
  }
  public sealed class UISettings {
    bool AutoHideScrollBars { get; }
    event TypedEventHandler<UISettings, UISettingsAutoHideScrollBarsChangedEventArgs> AutoHideScrollBarsChanged;
  }
  public sealed class UISettingsAutoHideScrollBarsChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    public static CoreInputView GetForUIContext(UIContext context);
  }
}
namespace Windows.UI.WindowManagement {
  public sealed class AppWindow
  public sealed class AppWindowChangedEventArgs
  public sealed class AppWindowClosedEventArgs
  public enum AppWindowClosedReason
  public sealed class AppWindowCloseRequestedEventArgs
  public sealed class AppWindowFrame
  public enum AppWindowFrameStyle
  public sealed class AppWindowPlacement
  public class AppWindowPresentationConfiguration
  public enum AppWindowPresentationKind
  public sealed class AppWindowPresenter
  public sealed class AppWindowTitleBar
  public sealed class AppWindowTitleBarOcclusion
  public enum AppWindowTitleBarVisibility
  public sealed class CompactOverlayPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class DefaultPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class DisplayRegion
  public sealed class FullScreenPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class WindowingEnvironment
  public sealed class WindowingEnvironmentAddedEventArgs
  public sealed class WindowingEnvironmentChangedEventArgs
  public enum WindowingEnvironmentKind
  public sealed class WindowingEnvironmentRemovedEventArgs
}
namespace Windows.UI.WindowManagement.Preview {
  public sealed class WindowManagementPreview
}
namespace Windows.UI.Xaml {
  public class UIElement : DependencyObject, IAnimationObject, IVisualElement {
    Vector3 ActualOffset { get; }
    Vector2 ActualSize { get; }
    Shadow Shadow { get; set; }
    public static DependencyProperty ShadowProperty { get; }
    UIContext UIContext { get; }
    XamlRoot XamlRoot { get; set; }
  }
  public class UIElementWeakCollection : IIterable<UIElement>, IVector<UIElement>
  public sealed class Window {
    UIContext UIContext { get; }
  }
  public sealed class XamlRoot
  public sealed class XamlRootChangedEventArgs
}
namespace Windows.UI.Xaml.Controls {
  public sealed class DatePickerFlyoutPresenter : Control {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class FlyoutPresenter : ContentControl {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class InkToolbar : Control {
    InkPresenter TargetInkPresenter { get; set; }
    public static DependencyProperty TargetInkPresenterProperty { get; }
  }
  public class MenuFlyoutPresenter : ItemsControl {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public sealed class TimePickerFlyoutPresenter : Control {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class TwoPaneView : Control
  public enum TwoPaneViewMode
  public enum TwoPaneViewPriority
  public enum TwoPaneViewTallModeConfiguration
  public enum TwoPaneViewWideModeConfiguration
}
namespace Windows.UI.Xaml.Controls.Maps {
  public sealed class MapControl : Control {
    bool CanTiltDown { get; }
    public static DependencyProperty CanTiltDownProperty { get; }
    bool CanTiltUp { get; }
    public static DependencyProperty CanTiltUpProperty { get; }
    bool CanZoomIn { get; }
    public static DependencyProperty CanZoomInProperty { get; }
    bool CanZoomOut { get; }
    public static DependencyProperty CanZoomOutProperty { get; }
  }
  public enum MapLoadingStatus {
    DownloadedMapsManagerUnavailable = 3,
  }
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public sealed class AppBarTemplateSettings : DependencyObject {
    double NegativeCompactVerticalDelta { get; }
    double NegativeHiddenVerticalDelta { get; }
    double NegativeMinimalVerticalDelta { get; }
  }
  public sealed class CommandBarTemplateSettings : DependencyObject {
    double OverflowContentCompactYTranslation { get; }
    double OverflowContentHiddenYTranslation { get; }
    double OverflowContentMinimalYTranslation { get; }
  }
  public class FlyoutBase : DependencyObject {
    bool IsConstrainedToRootBounds { get; }
    bool ShouldConstrainToRootBounds { get; set; }
    public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
    XamlRoot XamlRoot { get; set; }
  }
  public sealed class Popup : FrameworkElement {
    bool IsConstrainedToRootBounds { get; }
    bool ShouldConstrainToRootBounds { get; set; }
    public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
  }
}
namespace Windows.UI.Xaml.Core.Direct {
  public enum XamlPropertyIndex {
    AppBarTemplateSettings_NegativeCompactVerticalDelta = 2367,
    AppBarTemplateSettings_NegativeHiddenVerticalDelta = 2368,
    AppBarTemplateSettings_NegativeMinimalVerticalDelta = 2369,
    CommandBarTemplateSettings_OverflowContentCompactYTranslation = 2384,
    CommandBarTemplateSettings_OverflowContentHiddenYTranslation = 2385,
    CommandBarTemplateSettings_OverflowContentMinimalYTranslation = 2386,
    FlyoutBase_ShouldConstrainToRootBounds = 2378,
    FlyoutPresenter_IsDefaultShadowEnabled = 2380,
    MenuFlyoutPresenter_IsDefaultShadowEnabled = 2381,
    Popup_ShouldConstrainToRootBounds = 2379,
    ThemeShadow_Receivers = 2279,
    UIElement_ActualOffset = 2382,
    UIElement_ActualSize = 2383,
    UIElement_Shadow = 2130,
  }
  public enum XamlTypeIndex {
    ThemeShadow = 964,
  }
}
namespace Windows.UI.Xaml.Documents {
  public class TextElement : DependencyObject {
    XamlRoot XamlRoot { get; set; }
  }
}
namespace Windows.UI.Xaml.Hosting {
  public sealed class ElementCompositionPreview {
    public static UIElement GetAppWindowContent(AppWindow appWindow);
    public static void SetAppWindowContent(AppWindow appWindow, UIElement xamlContent);
  }
}
namespace Windows.UI.Xaml.Input {
  public sealed class FocusManager {
    public static object GetFocusedElement(XamlRoot xamlRoot);
  }
  public class StandardUICommand : XamlUICommand {
    StandardUICommandKind Kind { get; set; }
  }
}
namespace Windows.UI.Xaml.Media {
  public class AcrylicBrush : XamlCompositionBrushBase {
    IReference<double> TintLuminosityOpacity { get; set; }
    public static DependencyProperty TintLuminosityOpacityProperty { get; }
  }
  public class Shadow : DependencyObject
  public class ThemeShadow : Shadow
  public sealed class VisualTreeHelper {
    public static IVectorView<Popup> GetOpenPopupsForXamlRoot(XamlRoot xamlRoot);
  }
}
namespace Windows.UI.Xaml.Media.Animation {
  public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration {
    bool IsShadowEnabled { get; set; }
  }
}
namespace Windows.Web.Http {
  public sealed class HttpClient : IClosable, IStringable {
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryDeleteAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri, HttpCompletionOption completionOption);
    IAsyncOperationWithProgress<HttpGetBufferResult, HttpProgress> TryGetBufferAsync(Uri uri);
    IAsyncOperationWithProgress<HttpGetInputStreamResult, HttpProgress> TryGetInputStreamAsync(Uri uri);
    IAsyncOperationWithProgress<HttpGetStringResult, HttpProgress> TryGetStringAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPostAsync(Uri uri, IHttpContent content);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPutAsync(Uri uri, IHttpContent content);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request, HttpCompletionOption completionOption);
  }
  public sealed class HttpGetBufferResult : IClosable, IStringable
  public sealed class HttpGetInputStreamResult : IClosable, IStringable
  public sealed class HttpGetStringResult : IClosable, IStringable
  public sealed class HttpRequestResult : IClosable, IStringable
}
namespace Windows.Web.Http.Filters {
  public sealed class HttpBaseProtocolFilter : IClosable, IHttpFilter {
    User User { get; }
    public static HttpBaseProtocolFilter CreateForUser(User user);
  }
}

The post Windows 10 SDK Preview Build 18361 available now! appeared first on Windows Developer Blog.


Python in Visual Studio Code – March 2019 Release


We are pleased to announce that the March 2019 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation.  

In this release we made a series of improvements that are listed in our changelog, closing a total of 52 issues, including: 

  • Live Share support in the Python Interactive Window 
  • Support installing packages with Poetry 
  • Improvements to the Python Language Server 
  • Improvements to the Test Explorer 

 Keep on reading to learn more! 

Live Share for Python Interactive  

Visual Studio Live Share makes real-time collaboration easy: it gives you the ability to co-edit and co-debug while sharing audio, servers, terminals, diffs, comments, and more.  

In this update, the Python Interactive window has been enhanced to participate in Live Share collaboration sessions, making it possible to collaboratively explore and visualize data. Whether you are conducting a code review, pair programming with a teammate, participating in a hackathon, or even teaching an interactive lecture, Live Share can support the many ways you collaborate. 

Support installing packages with Poetry  

This new release also adds support for Poetry, a dependency manager that allows you to keep the project’s development dependencies separate from production ones, in Visual Studio Code with the Python extension. Poetry support in the Python extension was a highly requested feature on our GitHub repository.

To try out this new feature, first make sure you have Poetry installed and the corresponding lock file generated. You can refer to the documentation to learn how to get started with Poetry. Then add the path to Poetry in your settings (through File > Preferences > Settings and searching for Poetry, or adding "python.poetryPath": "path/to/poetry" to your settings.json file). 
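In settings.json form, the setting described above looks like this (the path is a placeholder; replace it with the location of your own Poetry executable):

```json
{
    "python.poetryPath": "path/to/poetry"
}
```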

Now when you install new packages, the extension will use the provided Poetry path to install them:

Improvements to the Python Language Server

This release includes significant enhancements to the Python Language Server, which was largely rewritten and brings improvements in performance, memory usage, and information display; support for relative imports and implicit packages; and a better understanding of typing, generics, PEP type hints, and annotations. It also now offers auto-completion for f-strings and shows type information when you hover over sub-expressions:

As a reminder, the Language Server was released as a preview in last July’s release of the Python extension. To opt in to the Language Server, change the python.jediEnabled setting to false in File > Preferences > User Settings. Because large changes were made to code analysis, there is a list of known issues that we are currently fixing. If you run into other problems, please file an issue on the Python Language Server GitHub page. We are working towards making the Language Server the default in future releases.
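Concretely, the opt-in described above is a single entry in settings.json:

```json
{
    "python.jediEnabled": false
}
```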

Improvements to the Test Explorer

In the February release of the Python extension we added a built-in Test Explorer, which can be accessed through the test beaker icon on the Activity Bar when tests are discovered in the workspace.

In this release we made improvements to the Test Explorer, including support for multi-root workspaces, parametrized tests, and new status icons. The status icons let you quickly see which test files or suites have failed without needing to expand the tree.

As a reminder, you can try the Test Explorer out by running the command Python: Discover Unit Tests from the Command Palette (View > Command Palette). If the unit test feature is disabled or no test framework is configured in the settings.json file, you’ll be prompted to select a framework and configure it. Once tests are discovered, the Test Explorer icon will appear on the Activity Bar.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:

  • Fixed stopOnEntry not stopping on user code (#1159)
  • Support multiline comments for markdown cells (#4215)
  • Update icons and tooltip in test explorer indicating status of test files/suites (#4583)
  • Added commands translation for polish locale. (thanks pypros) (#4435)

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page.


What’s new in Azure IoT Central – March 2019


In IoT Central, our aim is to simplify IoT. We want to make sure your IoT data drives meaningful actions and visualizations. In this post, I will share new features now available in Azure IoT Central including embedded Microsoft Flow, updates to the Azure IoT Central connector, Azure Monitor action groups, multiple dashboards, and localization support. We also recently expanded Jobs functionality in IoT Central, so you can check out the announcement blog post to learn more.

Microsoft Flow is now embedded in IoT Central

You can now build workflows using your favorite connectors directly within IoT Central. For example, you can build a temperature alert rule that triggers a workflow to send push notifications and SMS all in one place within IoT Central. You can also test and share the workflow, see the run history, and manage all workflows attached to that rule.

Try it out in your IoT Central app by visiting Rules under Device Templates, adding a new action, and picking the Microsoft Flow tile.

Embedded Microsoft Flow experience in IoT Central.

Updated Azure IoT Central connector: Send a command and get device actions

With the updated Azure IoT Central connector, you can now build workflows in Microsoft Flow and Azure Logic Apps that send commands to an IoT device and get device information such as its name, properties, and settings values. For example, you can build a workflow that tells an IoT device to reboot from a mobile app, or one that displays the device’s temperature setting and location property in a mobile app.

Try it out in Microsoft Flow or Azure Logic Apps by using the Send a command and Get device actions in your workflow.

Get a device and Run a command actions in Microsoft Flow.

Integration with Azure Monitor action groups

Azure Monitor action groups are reusable groups of actions that can be attached to multiple rules at once. Instead of creating separate actions for each rule and entering the recipient’s email address, SMS number, and webhook URL each time, you can choose an action group that contains all three from a drop-down and receive notifications on all three channels. The same action group can be attached to multiple rules and is reusable across Azure Monitor alerts.

Try it out in your IoT Central app by visiting Rules under Device Templates, adding a new action, and then picking the Azure Monitor action groups tile.

Azure monitor action groups as an action for your rules.

Multiple dashboards

Users can now create multiple personal dashboards in their IoT Central app, letting you build customized dashboards to better organize your devices and data. The default application dashboard is still available to all users, but each user of the app can create personalized dashboards and switch between them.

Multiple dashboards: choosing between application dashboards and personal dashboards.

Localization support

As of today, IoT Central supports 17 languages! You can select your preferred language in the settings section in the top navigation, and this will apply when you use any app in IoT Central. Each user can have their own preferred language, and you can change it at any time.

Choosing a different language in IoT Central.

With these new features, you can more conveniently build workflows as actions and reuse groups of actions, organize your visualizations across multiple dashboards, and work with IoT Central with your favorite language. Stay tuned for more developments in IoT Central. Until next time!

Next steps

  • Have ideas or suggestions for new features? Post it on Uservoice.
  • To explore the full set of features and capabilities and start your free trial, visit the IoT Central website.
  • Check out our documentation including tutorials to connect your first device.
  • To give us feedback about your experience with Azure IoT Central, take this survey.
  • To learn more about the Azure IoT portfolio including the latest news, visit the Microsoft Azure IoT page.

High-Throughput with Azure Blob Storage


I am happy to announce that High-Throughput Block Blob (HTBB) is globally enabled in Azure Blob Storage. HTBB provides significantly improved and instantaneous write throughput when ingesting larger block blobs, up to the storage account limits for a single blob. We have also removed the guesswork in naming your objects, enabling you to focus on building the most scalable applications and not worry about the vagaries of cloud storage.

HTBB demo of 12.5GB/s single blob throughput at Microsoft Ignite

I demonstrated the significantly improved write performance at Microsoft Ignite 2018. The demo application orchestrated the upload of 50,000 32 MiB blocks (1,600,000 MiB in total) from RAM using Put Block operations to a single blob. When all blocks were uploaded, it committed the blob by sending the block list using the Put Block List operation. The upload ran across four D64v3 worker virtual machines (VMs), each VM writing 25 percent of the blocks. The total upload took around 120 seconds, which is about 12.5 GB/s. Check out the demo in the video below to learn more.

GB+ throughput using a single virtual machine

To illustrate the performance possible with just a single VM, I created a D32v3 VM running Linux in US West 2. I stored the files to upload on a local RAM disk so that local storage performance would not affect the results. I created the files using the head command with input from /dev/urandom to fill them with random data. Finally, I used AzCopy v10 (v10.0.4) to upload the files to a standard storage account in the same region. I ran each iteration 5 times and averaged the upload times in the table below.

Data set Time to upload Throughput
1,000 x 10MB 10 seconds 1.0 GB/s
100 x 100MB 8 seconds 1.2 GB/s
10 x 1GB 8 seconds 1.2 GB/s
1 x 10GB 8 seconds 1.2 GB/s
1 x 100GB 58 seconds 1.7 GB/s
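The generate-and-upload steps described above can be sketched in a few shell commands (a rough sketch: the sizes and paths are scaled-down illustrations, and the AzCopy upload, which needs a real storage account URL and SAS token, is shown commented out):

```shell
# Create a few files filled with random data, as in the test above
# (sizes are scaled down here; the test used 10MB-100GB files on a RAM disk).
mkdir -p /tmp/htbb
for i in 1 2 3; do
  head -c $((10 * 1024 * 1024)) /dev/urandom > "/tmp/htbb/blob_${i}.bin"
done

# Upload with AzCopy v10 (account, container, and SAS token are placeholders):
# azcopy copy "/tmp/htbb/*" "https://<account>.blob.core.windows.net/<container>?<SAS>"

ls -l /tmp/htbb
```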

HTBB everywhere

HTBB is active on all your existing storage accounts and does not require opt-in. It also comes at no extra cost. HTBB doesn’t introduce any new APIs and is automatically active when you use Put Block or Put Blob operations over a certain size. The following table lists the minimum Put Blob or Put Block size required to activate HTBB.

Storage Account type Minimum size for HTBB
StorageV2 (General purpose v2) >4MB
Storage (General purpose v1) >4MB
Blob Storage >4MB
BlockBlobStorage (Premium) >256KB

Azure Tools and Services supporting HTBB

There is a broad set of tools and services that already support HTBB, including:

Conclusion

We’re excited about the throughput improvements and application simplifications High-Throughput Block Blob brings to Azure Blob Storage! It is now available in all Azure regions and automatically active on your existing storage accounts at no extra cost. We look forward to hearing your feedback. To learn more about Blob Storage, please visit our product page.

Azure Sphere ecosystem accelerates innovation


The Internet of Things (IoT) promises to help businesses cut costs and create new revenue streams, but it also brings an unsettling amount of risk. No one wants a fridge that gets shut down by ransomware, a toy that spies on children, or a production line that’s brought to a halt through an entry point in a single hacked sensor.

So how can device builders bring a high level of security to the billions of network-connected devices expected to be deployed in the next decade?

It starts with building security into your IoT solution from the silicon up. In this piece, I will discuss the holistic device security of Azure Sphere, as well as how the expansion of the Azure Sphere ecosystem is helping to accelerate the process of taking secure solutions to market. For additional partner-delivered insights around Azure Sphere, view the Azure Sphere Ecosystem Expansion Webinar.

Two women sitting together at a desk working on an Azure Sphere device

A new standard for security

Small, lightweight microcontrollers (or MCUs) are the most common class of computer, powering everything from appliances to industrial equipment. Organizations have learned that security for their MCU-powered devices is critical to their near-term sales and to the long-term success of their brands (one successful attack can drive customers away from the affected brand for years). Yet predicting which devices can endure attacks is difficult.

Through years of experience, Microsoft has learned that to be highly secured, a connected device must possess seven specific properties:

  1. Hardware-based root of trust: The device must have a unique, unforgeable identity that is inseparable from the hardware.
  2. Small trusted computing base: Most of the device's software should be outside a small trusted computing base, reducing the attack surface for security resources such as private keys.
  3. Defense in depth: Multiple layers of defense mean that even if one layer of security is breached, the device is still protected.
  4. Compartmentalization: Hardware-enforced barriers between software components prevent a breach in one from propagating to others.
  5. Certificate-based authentication: The device uses signed certificates to prove device identity and authenticity.
  6. Renewable security: Updated software is installed automatically and devices that enter risky states are always brought into a secure state.
  7. Failure reporting: All device failures, which could be evidence of attacks, are reported to the manufacturer.

These properties work together to keep devices protected and secured in today's dynamic threat landscape. Omitting even one of these seven properties can leave devices open to attack, creating situations where responding to security events is difficult and costly. The seven properties also act as a practical framework for evaluating IoT device security.

How Azure Sphere helps you build secure devices

Azure Sphere, Microsoft’s end-to-end solution for creating highly secure, connected devices, delivers these seven properties, making it easy and affordable for device manufacturers to create devices that are innately secure and prepared to meet evolving security threats. Azure Sphere introduces a new class of MCU that includes built-in Microsoft security technology, connectivity, and the headroom to support dynamic experiences at the intelligent edge.

Multiple levels of security are baked into the chip itself. The secured Azure Sphere OS runs on top of the hardware layer, only allowing authorized software to run. The Azure Sphere Security Service continually verifies the device's identity and authenticity and keeps its software up to date. Azure Sphere has been designed for security and affordability at scale, even for low-cost devices. 

Opportunities for ecosystem expansion

In today’s world, device manufacturing partners view security as a necessity for creating connected experiences. The end-to-end security of Azure Sphere creates a potential for significant innovation in IoT. With a turnkey solution that helps prevent, detect, and respond to threats, device manufacturers don’t need to invest in additional infrastructure or staff to secure these devices. Instead, they can focus their efforts on rethinking business models, product experiences, how they serve customers, and how they predict customer needs.

To accelerate innovation, we’re working to expand our partner ecosystem. Ecosystem expansion offers many advantages. It reduces the overall complexity of the final product and speeds time to market. It frees up device builders to expand technical capabilities to meet the needs of customers. Plus, it enables more responsive innovation of feature sets for module partners and customization of modules for a diverse ecosystem. Below we’ve highlighted some partners who are a key part of the Azure Sphere ecosystem.

Seeed Studio, a Microsoft partner that specializes in hardware prototyping, design, and manufacturing for IoT solutions, has been selling its MT3620 Development Board since April 2018. It also sells complementary hardware that enables rapid, solder-free prototyping using its Grove system of modular sensors, actuators, and displays. In September 2018, Seeed released the Grove starter kit, which contains an expansion shield and a selection of sensors. Beyond prototyping hardware, Seeed plans to launch more vertical solutions based on Azure Sphere for the IoT market. In March, Seeed also introduced the MT3620 Mini Dev Board, a lite version of its earlier Azure Sphere MT3620 Development Kit, developed to meet the needs of developers who want smaller sizes, greater scalability, and lower costs.

AI-Link has released the first Azure Sphere module that is ready for mass production. AI-Link is the top IoT module developer and manufacturer in the market today and shipped more than 90 million units in 2018.

Avnet, an IoT solution aggregator and Azure Sphere chips distributor, unveiled their Azure Sphere module and starter kit in January 2019. Avnet will also be building a library of general and application specific Azure Sphere reference designs to accelerate customer adoption and time to market for Azure Sphere devices and solutions.

Universal Scientific Industrial (Shanghai) Co., Ltd. (USI) recently unveiled its Azure Sphere combo module, uniquely designed for IoT applications, with multi-function design-in support through the standard SDK. Customers can easily migrate from a discrete MCU solution to this module, gaining secured connectivity to the cloud and a shorter design cycle.

Learn more about the Azure Sphere ecosystem

To learn more, view the on-demand Azure Sphere Ecosystem Expansion webinar. You’ll hear from each of our partners as they discuss the Azure Sphere opportunity from their own perspective, as well as how you can take full advantage of Azure Sphere ecosystem expansion efforts.

For in-person opportunities to gain actionable insights, deepen partnerships, and unlock the transformative potential of intelligent edge and intelligent cloud IoT solutions, sign up for an in-person IoT in Action event coming to a city near you.

Azure Stack IaaS – part six


Pay for what you use

In the virtualization days I used to pad all my requests for virtual machines (VMs) to get the largest size possible. Since decisions and requests took time, I would ask for more than I required just so I wouldn’t face delays if I needed more capacity. This resulted in a lot of waste and a term I heard often: VM sprawl.

The behavior is different with Infrastructure-as-a-Service (IaaS) VMs in the cloud. A fundamental quality of a cloud is that it provides an elastic pool of resources to draw on when needed. Since you only pay for what you use, you don’t need to over-provision. Instead, you can optimize capacity based on demand. Let me show you some of the ways you can do this for your IaaS VMs running in Azure and Azure Stack.

Resize

It’s hard to know exactly how big your VM should be. There are so many dimensions to consider, such as CPU, memory, disks, and network. Instead of trying to predict what your VM needs for the next year or even month, why not take a guess, let it run, and then adjust the size once you have some historical data.

Azure and Azure Stack make it easy for you to resize your VM from the portal. Pick the new size and you’re done. No need to call the infrastructure team and beg for more capacity. No need to overspend on a huge VM that isn’t even used.

Choose a virtual machine size through the portal in Microsoft Azure Stack

Learn more:

Scale out

Another dimension of scale is to make multiple copies of identical VMs that work together as a unit. When you need more, create additional VMs; when you need less, remove some of the VMs. Azure has a feature for this called Virtual Machine Scale Sets (VMSS), which is also available in Azure Stack. You can create a VMSS with a wizard. Fill out the details of how the VMs should be configured, including which extensions to use and which software to load onto them. Azure takes care of wiring the network, placing the VMs behind a load balancer, creating the VMs, and running the in-guest configuration.

Create a virtual machine scale set in Microsoft Azure Stack

Once you have created the VMSS, you can scale it up or down. Azure automates everything for you. You control it like IaaS, but scale it like PaaS. It was never this easy in the virtualization days.

Scale a Virtual Machine Scale Set up or down

Learn more:

Add, remove, and resize disk

Just like virtual machines in the cloud, storage is pay per use. Both Azure and Azure Stack make it easy for you to manage the disks running on that storage so you only need to use what your application requires. Adding, removing, and resizing data disks is a self-service action so you can right-size your VM’s storage based on your current needs.

Add, remove, and resize disk

Learn more:

Usage based pricing

Just like Azure, Azure Stack prices are based on how much you use. Since you take on the hardware and operating costs, Azure Stack service fees are typically lower than Azure prices. Your Azure Stack usage will show up as line items in your Azure bill. If you run your Azure Stack in a network which is disconnected from the Internet, Azure Stack offers a yearly capacity model.

Pay-per-use really benefits Azure Stack customers. For example, one organization runs a machine learning model once a month. It takes about one week for the computation. During this time, they use all the capacity of their Azure Stack, but for the other three weeks of the month, they run light, temporary workloads on the system. A later blog will cover how automation and infrastructure-as-code allow you to quickly set this up and tear it down, using just what the app needs in the time window it’s needed. Right-sizing and pay-per-use save you a lot of money.

Learn more:

In this blog series

We hope you come back to read future posts in this blog series. Here are some of our past and upcoming topics:

Analysis of network connection data with Azure Monitor for virtual machines


Azure Monitor for virtual machines (VMs) collects network connection data that you can use to analyze the dependencies and network traffic of your VMs. You can analyze the number of live and failed connections, bytes sent and received, and the connection dependencies of your VMs down to the process level. If malicious connections are detected, the data includes information about those IP addresses and their threat level. The newly released VMBoundPort data set enables analysis of open ports and their connections for security purposes.

To begin analyzing this data, you will need to be on-boarded to Azure Monitor for VMs.

Workbooks

If you would like to start your analysis with a prebuilt, editable report, try out some of the Workbooks we ship with Azure Monitor for VMs. Once on-boarded, navigate to Azure Monitor and select Virtual Machines (preview) from the Insights menu section. From there, go to the Performance or Map tab and select the View Workbook link, which opens the Workbook gallery. The gallery includes the following Workbooks that analyze our network data:

  • Connections overview
  • Failed connections
  • TCP traffic
  • Traffic comparison
  • Active ports
  • Open ports

These editable reports let you analyze your connection data for a single VM, groups of VMs, and virtual machine scale sets.

Log Analytics

If you want to use Log Analytics to analyze the data, you can navigate to Azure Monitor and select Logs to begin querying the data. The logs view will show the name of the workspace that has been selected and the schema within that workspace. Under the ServiceMap data type you will find two tables:

  • VMBoundPort
  • VMConnection

You can copy and paste the queries below into the Log Analytics query box to run them. Please note, you will need to edit a few of the examples below to provide the name of a computer that you want to query.

Screenshot of copying and pasting queries into the Log Analytics query box

Common queries

Review the count of ports open on your VMs, which is useful when assessing VM configuration and security vulnerabilities.

VMBoundPort
| where Ip != "127.0.0.1"
| summarize by Computer, Machine, Port, Protocol
| summarize OpenPorts=count() by Computer, Machine
| order by OpenPorts desc

List the bound ports on your VMs, which is also useful when assessing VM configuration and security vulnerabilities.

VMBoundPort
| distinct Computer, Port, ProcessName

Analyze network activity by port to determine how your application or service is configured.

VMBoundPort
| where Ip != "127.0.0.1"
| summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
| project-away TimeGenerated
| order by Machine, Computer, Port, Ip, ProcessName

Bytes sent and received trends for your VMs.

VMConnection
| summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated, 1h), Computer
| order by Computer desc
//| limit 5000
| render timechart

If you have a lot of computers in your workspace, you may want to uncomment the limit statement in the example above. You can use the chart tools to view either bytes sent or received, and to filter down to specific computers.

Screenshot of chart tools being used to view Bytes sent or received

Connection failures over time, to determine if the failure rate is stable or changing.

VMConnection
| where Computer == <replace this with a computer name, e.g. ‘acme-demo’>
| extend bythehour = datetime_part("hour", TimeGenerated)
| project bythehour, LinksFailed
| summarize failCount = sum(LinksFailed) by bythehour
| sort by bythehour asc
| render timechart

Link status trends, to analyze the behavior and connection status of a machine.

VMConnection
| where Computer == <replace this with a computer name, e.g. ‘acme-demo’>
| summarize  dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h)
| render timechart
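The malicious-connection data mentioned at the start of this post can be explored in the same way. A sketch, assuming the documented MaliciousIp and IndicatorThreatType columns of the VMConnection table:

VMConnection
| where isnotempty(MaliciousIp)
| summarize ConnectionCount = count() by Computer, MaliciousIp, IndicatorThreatType
| order by ConnectionCount desc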

Screenshot of line chart showing query results from the last 24 hours

Getting started with log queries in Azure Monitor for VMs

To learn more about Azure Monitor for VMs, please read our overview, “What is Azure Monitor for VMs (preview).” If you are already using Azure Monitor for VMs, you can find additional example queries in our documentation for querying data with Log Analytics.

Happy birthday to managed Open Source RDBMS services in Azure!


March 20, 2019 marked the first anniversary of general availability for our managed Open Source relational database management system (RDBMS) services, including Azure Database for PostgreSQL and Azure Database for MySQL. A great year of learning and improvements lays behind us, and we are looking forward to an exciting future!

Thank you to all our customers, who have trusted Azure to host their Open Source Software (OSS) applications with MySQL and PostgreSQL databases. We are very grateful for your support and for pushing us to build the best managed services in the cloud!

It’s amazing to see the variety of mission-critical applications that customers run on top of our services. From line-of-business applications to real-time event processing and Internet of Things applications, we see all possible patterns running across our different OSS RDBMS offerings. Check out some great success stories by reading our case studies! It’s humbling to see the trust our customers put in the platform. We love the challenges posed by this variety of use cases, and we are always eager to learn and provide even better support.

We wouldn’t have reached this point without ongoing feedback and feature requests from our customers. There have been asks for functionality such as read replicas, greater performance, extended regional coverage, additional RDBMS engines like MariaDB, and more. In response, over the year since our services became generally available, we have delivered features and functionality to address these asks. Just check out some of the announcements we have made over the past year:

We also want to enable customers to focus on using these features when developing their applications. To that end, we are constantly enhancing our compliance certification portfolio to address a broader set of standards. This gives customers peace of mind, knowing that our services are increasingly safe and secure. We have also introduced features such as Threat Protection (MySQL, PostgreSQL) and Intelligent Performance (PostgreSQL) to the OSS RDBMS services, so there are two fewer things to worry about!

Open Source is all about the community and the ecosystem built around the Open Source products delivered by the community. We want to bring this goodness to our platform and support it so that customers can leverage the benefits when using our managed services. For example, we have recently announced support for GraphQL with Hasura and TimescaleDB! However, we want to be more than a consumer and make significant contributions to the community. Our first major contribution was the release of the Open Source Azure Data Studio with support for PostgreSQL.

While we are proud to highlight these developments, we also understand that we are still at the outset of the journey. We have a lot of work to do and many challenges to overcome, but we are continuing to move ahead at full steam. We are thrilled to have Citus Data join the team, and you can expect to see a lot of focus on enabling improved performance, greater scale, and more built-in intelligence. Find more information about this acquisition by visiting the blog post, “Microsoft and Citus Data: Providing the best PostgreSQL service in the cloud.”

Next steps

In the interim, be sure to take advantage of the following, helpful resources.

We look forward to continued feedback and feature requests from our customers. More than ever, we are committed to ensuring that our OSS RDBMS services are top-notch leaders in the cloud! Stay tuned, as we have a lot more in the pipeline!


Azure Blob Storage lifecycle management generally available


Data sets have unique lifecycles. Some data is accessed often early in the lifecycle, but the need for access drops drastically as the data ages. Some data remains idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation while other data sets are actively read and modified throughout their lifetimes.

Today we are excited to share the general availability of Blob Storage lifecycle management so that you can automate blob tiering and retention with custom defined rules. This feature is available in all Azure public regions.

Lifecycle management

Azure Blob Storage lifecycle management offers a rich, rule-based policy which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle.

Lifecycle management policy helps you:

  • Transition blobs to a cooler storage tier such as hot to cool, hot to archive, or cool to archive in order to optimize for performance and cost
  • Delete blobs at the end of their lifecycles
  • Define up to 100 rules
  • Run rules automatically once a day
  • Apply rules to containers or a specific subset of blobs, with up to 10 prefixes per rule

To learn more, visit our documentation, “Managing the Azure Blob storage Lifecycle.”

Example

Consider a data set that is accessed frequently during the first month, is needed only occasionally for the next two months, is rarely accessed afterwards, and must be expired after seven years. In this scenario, hot storage is the best tier to use initially, cool storage is appropriate for occasional access, and archive storage is the best tier after several months and before the data is deleted seven years later.

The following sample policy manages the lifecycle for such data. It applies to block blobs in container “foo”:

  • Tier blobs to cool storage 30 days after last modification
  • Tier blobs to archive storage 90 days after last modification
  • Delete blobs 2,555 days (seven years) after last modification
  • Delete blob snapshots 90 days after snapshot creation
{
   "rules": [
     {
       "name": "ruleFoo",
       "enabled": true,
       "type": "Lifecycle",
       "definition": {
         "filters": {
           "blobTypes": [ "blockBlob" ],
           "prefixMatch": [ "foo" ]
         },
         "actions": {
           "baseBlob": {
             "tierToCool": { "daysAfterModificationGreaterThan": 30 },
             "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
             "delete": { "daysAfterModificationGreaterThan": 2555 }
           },
           "snapshot": {
             "delete": { "daysAfterCreationGreaterThan": 90 }
           }
         }
       }
     }
   ]
}

Pricing

Lifecycle management is free of charge. Customers are charged the regular operation cost for the “List Blobs” and “Set Blob Tier” API calls initiated by this feature. To learn more about pricing visit the Block Blob pricing page.

Next steps

We are confident that Azure Blob Storage lifecycle management policy will simplify your cloud storage management and cost optimization strategy. We look forward to hearing your feedback on this feature and suggestions for future improvements through email at DLMFeedback@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at Azure Storage feedback forum.

Announcing new integrations with Autodesk AutoCAD for Microsoft OneDrive and SharePoint

SIMD Extension to C++ OpenMP in Visual Studio


In the era of ubiquitous AI applications there is an emerging demand for compilers that accelerate computation-intensive machine-learning code on existing hardware. Such code usually performs mathematical computation, such as matrix transformation and manipulation, and usually takes the form of loops. The SIMD extension of OpenMP provides users an effortless way to speed up loops by explicitly leveraging the vector unit of modern processors. We are proud to start offering C/C++ OpenMP SIMD vectorization in Visual Studio 2019.

The OpenMP C/C++ application program interface was originally designed in the 1990s to improve application performance by enabling code to be executed efficiently in parallel on multiple processors. Over the years the OpenMP standard has been expanded to support additional concepts such as task-based parallelization, SIMD vectorization, and processor offloading. Since 2005, Visual Studio has supported the OpenMP 2.0 standard, which focuses on multithreaded parallelization. As the world moves into an AI era, we see a growing opportunity to improve code quality by expanding our support of the OpenMP standard in Visual Studio. We continue our journey in Visual Studio 2019 by adding support for OpenMP SIMD.

OpenMP SIMD, first introduced in the OpenMP 4.0 standard, mainly targets loop vectorization. It is so far the most widely used OpenMP feature in machine learning according to our research. By annotating a loop with an OpenMP SIMD directive, the compiler can ignore vector dependencies and vectorize the loop as much as possible. The compiler respects users’ intention to have multiple loop iterations executed simultaneously.

#pragma omp simd 
for (i = 0; i < count; i++) 
{ 
    a[i] = b[i] + 1; 
}

As you may know, C++ in Visual Studio already provides similar non-OpenMP loop pragmas like #pragma vector and #pragma ivdep. However, the compiler can do more with OpenMP SIMD. For example:

  1. The compiler is always allowed to ignore any vector dependencies that are present.
  2. /fp:fast is enabled within the loop.
  3. Loops with function calls are vectorizable.
  4. Outer loops are vectorizable.
  5. Nested loops can be coalesced into one loop and vectorized.
  6. Hybrid acceleration is achievable with #pragma omp for simd to enable coarse-grained multithreading and fine-grained vectorization.
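
Item 6 above can be sketched as follows. This is a minimal, hedged illustration (the function and data are invented for this example): the combined construct, written here in its standard combined form `#pragma omp parallel for simd`, splits iterations across threads and vectorizes within each thread's chunk. Without an OpenMP-enabled build the pragma is simply ignored and the loop runs serially, with the same result:

```cpp
#include <vector>

// Coarse-grained multithreading plus fine-grained vectorization:
// iterations are divided among threads, and each thread's chunk is vectorized.
void scale(std::vector<float>& v, float k)
{
#pragma omp parallel for simd
    for (int i = 0; i < static_cast<int>(v.size()); ++i)
        v[i] *= k;
}
```

Note that the `#pragma omp for simd` spelling mentioned above assumes an enclosing `parallel` region; the combined form used in this sketch is the equivalent when no such region exists.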

In addition, the OpenMP SIMD directive can take the following clauses to further enhance the vectorization:

  • simdlen(length) : specify the number of vector lanes
  • safelen(length) : specify the vector dependency distance
  • linear(list[ : linear-step]) : the linear mapping from loop induction variable to array subscription
  • aligned(list[ : alignment]): the alignment of data
  • private(list) : specify data privatization
  • lastprivate(list) : specify data privatization with final value from the last iteration
  • reduction(reduction-identifier : list) : specify customized reduction operations
  • collapse(n) : coalescing loop nest
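
As a concrete illustration of one of these clauses, a `reduction` gives each vector lane a private partial accumulator that is combined with the named operator when the loop finishes. The sketch below is illustrative only (the function and data are invented, and note that Visual Studio 2019 parses but ignores these clauses at the time of writing, as described later in this post):

```cpp
// Dot product with a '+' reduction over 'sum'; conceptually, each SIMD lane
// accumulates a private partial sum, and the partials are added after the loop.
float dot(const float* a, const float* b, int n)
{
    float sum = 0.0f;
#pragma omp simd reduction(+ : sum)
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}
```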

New -openmp:experimental switch

An OpenMP-SIMD-annotated program can be compiled with a new CL switch, -openmp:experimental. This new switch enables additional OpenMP features not available under -openmp. While the name of this switch is “experimental”, the switch itself and the functionality it enables are fully supported and production-ready. The name reflects that it doesn’t enable any complete subset or version of an OpenMP standard. Future iterations of the compiler may use this switch to enable additional OpenMP features, and new OpenMP-related switches may be added. The -openmp:experimental switch subsumes the -openmp switch, which means it is compatible with all OpenMP 2.0 features. Note that the SIMD directive and its clauses cannot be compiled with the -openmp switch.

For loops that are not vectorized, the compiler will issue an informational message for each of them, as in the following example:

cl -O2 -openmp:experimental mycode.cpp

mycode.cpp(84) : info C5002: Omp simd loop not vectorized due to reason '1200'

mycode.cpp(90) : info C5002: Omp simd loop not vectorized due to reason '1200'

For loops that are vectorized, the compiler remains silent unless a vectorization logging switch is provided:

cl -O2 -openmp:experimental -Qvec-report:2 mycode.cpp

mycode.cpp(84) : info C5002: Omp simd loop not vectorized due to reason '1200'

mycode.cpp(90) : info C5002: Omp simd loop not vectorized due to reason '1200'

mycode.cpp(96) : info C5001: Omp simd loop vectorized

As the first step of supporting OpenMP SIMD, we have hooked up the SIMD pragma with the backend vectorizer under the new switch. We focused on vectorizing innermost loops by improving the vectorizer and alias analysis. None of the SIMD clauses are effective in Visual Studio 2019 at the time of this writing. They will be parsed but ignored by the compiler, with a warning issued for the user’s awareness. For example, the compiler will issue

warning C4849: OpenMP 'simdlen' clause ignored in 'simd' directive

for the following code:

#pragma omp simd simdlen(8)
for (i = 1; i < count; i++)
{
    a[i] = a[i-1] + 1;
    b[i] = *c + 1;
    bar(i);
}

More about the semantics of OpenMP SIMD directive

The OpenMP SIMD directive provides users a way to direct the compiler to vectorize a loop. The compiler is allowed to disregard the apparent legality of such vectorization by accepting the user’s promise of correctness. It is the user’s responsibility if unexpected behavior results from the vectorization. By annotating a loop with the OpenMP SIMD directive, the user intends to have multiple loop iterations executed simultaneously. This gives the compiler a lot of freedom to generate machine code that takes advantage of SIMD or vector resources on the target processor. While the compiler is not responsible for verifying the correctness or profitability of such user-specified parallelism, it must still ensure the sequential behavior of a single loop iteration.

For example, the following loop is annotated with the OpenMP SIMD directive. There is no perfect parallelism among loop iterations since there is a backward dependency from a[i] to a[i-1]. But because of the SIMD directive the compiler is still allowed to pack consecutive iterations of the first statement into one vector instruction and run them in parallel.

#pragma omp simd
for (i = 1; i < count; i++)
{
    a[i] = a[i-1] + 1;
    b[i] = *c + 1;
    bar(i);
}

Therefore, the following transformed vector form of the loop is legal because the compiler keeps the sequential behavior of each original loop iteration. In other words, a[i] is executed after a[i-1], b[i] after a[i], and the call to bar happens last.

#pragma omp simd
for (i = 1; i < count; i+=4)
{
    a[i:i+3] = a[i-1:i+2] + 1;
    b[i:i+3] = *c + 1;
    bar(i);
    bar(i+1);
    bar(i+2);
    bar(i+3);
}

It is illegal to move the memory reference *c out of the loop if it may alias with a[i] or b[i]. It’s also illegal to reorder the statements inside one original iteration if it breaks the sequential dependency. As an example, the following transformed loop is not legal.

c = b;
t = *c;
#pragma omp simd
for (i = 1; i < count; i+=4)
{
    a[i:i+3] = a[i-1:i+2] + 1;
    bar(i);            // illegal to reorder if bar(i) depends on b[i]
    b[i:i+3] = t + 1;  // illegal to move *c out of the loop
    bar(i+1);
    bar(i+2);
    bar(i+3);
}

 

Future Plans and Feedback

We encourage you to try out this new feature. As always, we welcome your feedback. If you see an OpenMP SIMD loop that you expect to be vectorized but isn’t, or the generated code is not optimal, please let us know. We can be reached via the comments below, via email (visualcpp@microsoft.com), on Twitter (@visualc), or via Developer Community.

Moving forward, we’d love to hear which OpenMP functionality you need that is missing in Visual Studio. As there have been several major evolutions of OpenMP since the 2.0 standard, OpenMP now has tremendous features to ease the effort of building high-performance programs. For instance, task-based concurrency programming is available starting from OpenMP 3.0. Heterogeneous computing (CPU + accelerators) is supported in OpenMP 4.0. Advanced SIMD vectorization and DOACROSS loop parallelization support are also available in the latest OpenMP standard. Please check out the complete standard revisions and feature sets on the OpenMP official website: https://www.openmp.org. We sincerely ask for your thoughts on the specific OpenMP features you would like to see. We’re also interested in hearing how you’re using OpenMP to accelerate your code. Your feedback is critical: it will help drive the direction of OpenMP support in Visual Studio.

 

The post SIMD Extension to C++ OpenMP in Visual Studio appeared first on C++ Team Blog.

Re-reading ASP.Net Core request bodies with EnableBuffering()


In some scenarios there’s a need to read the request body multiple times. Some examples include

  • Logging raw requests to replay in a load test environment
  • Middleware that reads the request body multiple times to process it

Usually Request.Body does not support rewinding, so it can only be read once. A straightforward solution is to save a copy of the stream in another stream that supports seeking so the content can be read multiple times from the copy.

In the ASP.NET framework it was possible to read the body of an HTTP request multiple times using the HttpRequest.GetBufferedInputStream method. However, in ASP.NET Core a different approach must be used.

In ASP.NET Core 2.1 we added an extension method, EnableBuffering(), for HttpRequest. This is the suggested way to enable the request body to be read multiple times. Here is an example usage in the InvokeAsync() method of a custom ASP.NET Core middleware:

public async Task InvokeAsync(HttpContext context, RequestDelegate next)
{
    context.Request.EnableBuffering();

    // Leave the body open so the next middleware can read it.
    using (var reader = new StreamReader(
        context.Request.Body,
        encoding: Encoding.UTF8,
        detectEncodingFromByteOrderMarks: false,
        bufferSize: 1024, // any reasonable buffer size; the original sample assumed a bufferSize variable
        leaveOpen: true))
    {
        var body = await reader.ReadToEndAsync();
        // Do some processing with body…

        // Reset the request body stream position so the next middleware can read it
        context.Request.Body.Position = 0;
    }

    // Call the next delegate/middleware in the pipeline
    await next(context);
}

The backing FileBufferingReadStream uses a memory stream up to a certain size first, then falls back to a temporary file stream. By default the size limit of the memory stream is 30KB. There are also other EnableBuffering() overloads that allow specifying a different threshold and/or a limit for the total size:

public static void EnableBuffering(this HttpRequest request, int bufferThreshold)

public static void EnableBuffering(this HttpRequest request, long bufferLimit)

public static void EnableBuffering(this HttpRequest request, int bufferThreshold, long bufferLimit)

For example, a call of

context.Request.EnableBuffering(bufferThreshold: 1024 * 45, bufferLimit: 1024 * 100);

enables a read buffer with a limit of 100KB. Data is buffered in memory until the content exceeds 45KB; then it’s moved to a temporary file. By default there’s no limit on the buffer size, but if one is specified and the content of the request body exceeds the limit, a System.IOException will be thrown.

These overloads offer flexibility if there’s a need to fine-tune the buffering behaviors. Just keep in mind that:

  • Even though the memory stream is rented from a pool, it still has a memory cost associated with it.
  • Once the amount read exceeds the bufferThreshold, performance will be slower since a file stream will be used.

The post Re-reading ASP.Net Core request bodies with EnableBuffering() appeared first on ASP.NET Blog.

Hannover Messe 2019: Azure IoT Platform updates power new, highly-secured Industrial IoT Scenarios


We’re proud to be showcasing at Hannover Messe once again next week. Manufacturing continues to be one of the leading industries adopting IoT for a growing set of scenarios to improve safety, efficiency, and reliability for people and devices. Every year, I get to meet with partners and customers and learn about how their needs and use cases are growing and changing, as they continue to digitize their operations and deliver on the promise of Industry 4.0. They want security more integrated into every layer, protecting data from different industrial processes and operations from the edge to the cloud. They want to enable proof-of-concepts quickly to improve the pace of innovation and learning, and then scale quickly and effectively. And they want to manage digital assets at scale, not dozens of devices and sensors. Over the last year, we’ve made several significant additions to our IoT platform to address these needs, including the launch of Azure Digital Twins and Azure Sphere and the general availability of Azure IoT Central and Azure IoT Edge. Next week at Hannover Messe, we’re introducing a set of new product capabilities and programs that make it easier for our customers to build enterprise-grade industrial IoT solutions with open standards, while ensuring security and innovation protection across cloud boundaries.

Securing IoT solutions

Securing IoT solutions requires new capabilities to protect the thousands of devices deployed on the edge. To truly secure an IoT solution, you must secure devices, their connectivity to the cloud, the services running in the cloud, and the applications built on top of them. 

At Hannover Messe, we’re thrilled to announce Azure Security Center for IoT, the world’s first comprehensive security offering for IoT.

With Azure Security Center for IoT, customers can benefit from a holistic view of their IoT security and take measures aligned with industry best practices, such as monitoring devices for open ports. The ever-evolving threat landscape requires customers to go far beyond this, by also inspecting and monitoring the security properties of devices and workloads for potential attacks. Azure has unique threat intelligence sourced from the more than 6 trillion signals that Microsoft collects every day and makes that available to customers in Azure Security Center.

Beyond the security posture management and threat protection capabilities provided in Azure Security Center, many SecOps teams rely on SIEM tools for advanced hunting and threat mitigation across their entire enterprise. At RSA earlier this month, we announced Azure Sentinel, the first cloud-native SIEM from a major public cloud provider. Today, we take it a step further by enhancing Azure Sentinel so that customers can combine their IoT security data with security data from across the enterprise, and then apply analysis techniques or machine learning to identify and mitigate threats.

Screenshot of the Security blade in Azure IoT Hub

This announcement empowers manufacturers to reduce the attack surface of Azure IoT solutions running across all their operations, remediate issues before they become serious, and apply analytics and machine learning to prevent attacks. Azure is the first major public cloud provider to deliver the breadth of these security innovations for end-to-end IoT solutions and this announcement marks an important leap forward as we offer new security layers for your IoT workloads. 

We also want to continue driving innovation in IoT, which requires us to take measures to protect our customers’ and partners’ innovations. That’s why today we’re extending the Azure IP Advantage benefits to Azure customers with IoT devices connected to Azure, and devices that are powered by Azure Sphere and Windows IoT. Thyssenkrupp, Bühler, and MediaTek are three companies that see the benefit of added protections from IP risk as they transition into Industry 4.0 and generate value from their IoT workloads. The program offers customers uncapped indemnification coverage for Azure Sphere and Windows IoT and access to 10,000 Microsoft patents that are available to Azure customers and can be critical in deterring competitors from suing for patent infringement. More detail about the new program is available on the Microsoft on the Issues blog.

Accelerate Industrial IoT Solutions with an Open Cloud Platform, Open Interoperability Standards and Open Source

We’ve continued to innovate by developing additional open-source components based on open interoperability standards (OPC UA) for our open cloud platform. These new components provide security management as well as performance optimization and simplify the experience for our customers. Today we’re announcing OPC Twin, which creates a digital twin for OPC UA-enabled machines, makes their information model available in the cloud, and enables machine interaction from the cloud. We’ve also extended our OPC UA security and certificate management by launching OPC Vault. OPC Vault automates security management by creating, managing, and revoking certificates for OPC UA-enabled machines on a global scale. Both components simplify integration into existing or new cloud applications by providing REST interfaces, and both are available on GitHub today. In addition, we’re excited to announce enhancements to the Connected Factory solution accelerator, which now also integrates an OPC Twin dashboard. Connected Factory is designed to accelerate proof-of-concepts in Industrial IoT and additionally offers OEE data across customers’ factories via a centralized dashboard.

For Industrial IoT scenarios, time series data is a critical component to unlocking exciting opportunities to drive growth by providing operational insights in fractions of a second on a global scale. Later in the summer we will be building on our recent momentum with Azure Time Series Insights (TSI) by enabling our customers to take advantage of integrating both warm and cold path analytics into a single offering under the pay-as-you-go version that was announced in December of last year. This provides customers a more predictable, cost-effective, and flexible analytics platform for their Industrial IoT scenarios. We are also working toward delivering a wide variety of analytics scenarios by offering support for storage tier configuration based on retention, and we have released enhancements to the user experience.

Build enterprise-grade Industrial IoT solutions across cloud boundaries

Last year we announced Azure IoT Hub on Azure Stack in limited preview to meet industrial manufacturers’ latency and connectivity requirements, as well as their specific regulatory and compliance policies. Customers that are working with us are benefiting from running their IoT solutions on a hybrid model. Rockwell Automation has partnered with us to build IoT solutions that stretch from the intelligent cloud to the intelligent edge. It’s not uncommon to have facilities that are in remote areas or immersed in conditions that cause inconsistent network connectivity. Rockwell Automation is participating in the Azure IoT Hub on Azure Stack limited preview to extend a consistent solution to the edge of production. Running IoT on Azure Stack in a hybrid model has empowered ZEISS to continue providing clients with new insights about their products, production, and processes. ZEISS spectroscopy helps clients to optimize their processes based on valuable insights about their products and production, when they need it and where they need it – thanks to smart solutions and connected technology. Their solutions for the food industry provide real-time measurement of important quality indicators, such as fat, moisture, and salt content directly on the production line. This data is then sent to the cloud, allowing production managers to optimize quality almost immediately, while enabling a more efficient way of using raw materials and energy.

It’s an exciting time to be a manufacturer, when you have the power of data and connected devices at your fingertips to drive real-time insights and actions. We hope to see you at Hannover Messe where you can see and learn more about these announcements as well as see partners and customers’ showcasing these solutions. We will be at the Digital Factory Fair in Hall 7 – stop by and meet us.

Closing the skills gap in manufacturing with Microsoft 365


In this era of digital transformation, manufacturers must reimagine the roles, skills, and tools to transform how they work. To help manufacturers with their digital transformation, we’re enabling new ways to work with Microsoft 365 for Firstline Workers to learn, communicate, and collaborate more effectively.

The post Closing the skills gap in manufacturing with Microsoft 365 appeared first on Microsoft 365 Blog.

Get an official service issue root cause analysis with Azure Service Health


After you experience a Microsoft Azure service issue, you likely need to explain what happened to your customers, management, and other stakeholders. That’s why Azure Service Health provides official incident reports and root cause analyses (RCAs) from Microsoft.

Azure Service Health helps you stay informed and take action when Azure service issues like incidents and planned maintenance affect you by providing a personalized health dashboard, customizable alerts, and expert guidance. In this blog, we’ll cover how you can use Azure Service Health’s health history to review past health issues and get official root cause analyses (RCAs) to share with your internal and external stakeholders.

Review past health issues and get official root cause analyses (RCAs)

You can see 90 days of history about past incidents, maintenance, and health advisories in Azure Service Health’s “Health history” section. This is a tailored view of the Azure Activity Log provided by Azure Monitor.

Screenshot of Service Health History

If you experienced downtime, your internal or external stakeholders might expect an official report or RCA. As soon as they become available, RCAs can be found under any incident. Meanwhile, you can download and share Microsoft’s issue summary as a PDF.

Learn more about getting downloadable explanations in the Service Health documentation.

Get started with Azure Service Health

Azure Service Health provides a large amount of information about incidents, planned maintenance, and other health advisories that could affect you. While you can always visit the dashboard in the portal, the best way to stay informed and take action is to set up Azure Service Health alerts. With alerts, as soon as we publish any health-related information, you’ll get notified on whichever channels you prefer, including email, SMS, push notification, webhook into ServiceNow, and more. We’ll also notify you when we publish RCAs.

Screenshot displaying the creation of a new service health alert

Next steps

Review your Azure Service Health dashboard and set up alerts in the Azure portal. If you need help getting started visit the Azure Service Health documentation. We always welcome feedback. Submit your ideas at Azure Service Health feedback forum or email us with any questions and comments at servicehealth@microsoft.com.


Announcing the public preview of Data Discovery & Classification for Azure SQL Data Warehouse


Today we’re announcing the public preview of Data Discovery & Classification for Azure SQL Data Warehouse, an additional capability for managing security for sensitive data. Azure SQL Data Warehouse is a fast, flexible, and secure cloud data warehouse tuned for running complex queries quickly across petabytes of data.

While it’s critical to protect the privacy of your customers and other sensitive data, it becomes unmanageable to discover, classify, and protect such sensitive data as your business and data assets grow rapidly. The Data Discovery & Classification feature that we’re introducing natively in Azure SQL Data Warehouse helps alleviate this pain point. The overall benefits of this capability are:

  • Meeting data privacy standards and regulatory compliance requirements such as General Data Protection Regulation (GDPR).
  • Restricting access to and hardening the security of data warehouses containing highly sensitive data.
  • Monitoring and alerting on anomalous access to sensitive data.
  • Visualization of sensitive data in a central dashboard on the Azure portal.

What is Data Discovery & Classification?

Data Discovery & Classification introduces a set of advanced capabilities aimed at protecting data and not just the data warehouse itself.

  • Auto-discovery and recommendations – The underlying classification engine automatically scans your data warehouse and identifies columns containing potentially sensitive data. It also provides an easy way to review and apply appropriate classification recommendations through the Azure portal.
  • Classification/Labeling – Sensitivity classification labels tagged on the columns can be persisted in the data warehouse itself.
  • Reporting – Data classification can be centrally viewed on a dashboard in the Azure portal. In addition, you can download a report in Microsoft Excel format for compliance and auditing purposes.
  • Monitoring/Auditing – Auditing has been enhanced to log the sensitivity classifications (labels) of the actual data returned by a query. This enables you to gain insight into who is accessing sensitive data.

Gif image displaying a Data-Discovery & Classification overview

How does Data Discovery & Classification work?

The Data Discovery & Classification capability has a built-in automated classification engine that identifies columns containing potentially sensitive data and provides a list of recommendations for you to choose from. These classifications can be persisted as sensitivity metadata on the columns directly in the data warehouse. You can also manually classify and label your columns, and you can define custom labels and information types in addition to those generated by the system.

You can also use T-SQL to add, remove, and retrieve column classifications across all tables in your data warehouse:
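As a sketch of those T-SQL operations (the table, column, label, and information-type names below are illustrative):

```sql
-- Tag a column with a sensitivity label and information type
ADD SENSITIVITY CLASSIFICATION TO
    dbo.DimCustomer.EmailAddress
    WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

-- Remove a classification
DROP SENSITIVITY CLASSIFICATION FROM dbo.DimCustomer.EmailAddress;

-- Retrieve all classifications across the data warehouse
SELECT s.name AS schema_name, t.name AS table_name, c.name AS column_name,
       sc.label, sc.information_type
FROM sys.sensitivity_classifications AS sc
JOIN sys.columns AS c ON sc.major_id = c.object_id AND sc.minor_id = c.column_id
JOIN sys.tables  AS t ON c.object_id = t.object_id
JOIN sys.schemas AS s ON t.schema_id = s.schema_id;
```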

Additionally, the Azure SQL Data Warehouse engine uses the column classifications to determine the sensitivity of query results. Combined with Azure SQL Data Warehouse auditing, this enables you to audit the sensitivity of the actual data being returned by queries.

This capability is now available in all Azure regions as part of Advanced Data Security, which also includes Vulnerability Assessment and Threat Detection. For more information on Data Discovery & Classification in Azure SQL Data Warehouse, refer to our online documentation, “Azure SQL Database Data Discovery & Classification.”

Azure SQL Data Warehouse continues to lead in the areas of security, compliance, privacy, and auditing. Check out our latest videos on Azure SQL Data Warehouse security-related topics:

Next steps

Azure Marketplace new offers – Volume 34

We continue to expand the Azure Marketplace ecosystem. From February 16 to February 28, 2019, 50 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual machines

Analytics Zoo- A unified Analytics AI platform

Analytics Zoo: A unified Analytics + AI platform: Analytics Zoo provides a unified analytics and AI platform that unites Spark, TensorFlow, Keras, and BigDL programs into an integrated pipeline. The pipeline can then transparently scale out to a large Hadoop/Spark cluster.

Blender 3D On Windows Server 2016

Blender 3D On Windows Server 2016: Studios around the world use Blender as their go-to 3-D software for remodeling, rendering, animation, video editing, compositing, texturing, and more. Apps4Rent helps you deploy Blender on Microsoft Azure.

CIS CentOS 7.5 Benchmark L1

CIS CentOS 7.5 Benchmark L1: This image of CentOS 7.5 is preconfigured by CIS to the recommendations in the associated CIS Benchmark. CIS Benchmarks are vendor-agnostic, consensus-based security configuration guides.

IBM DB2 Advanced Enterprise Server Edition 11.1

IBM DB2 Advanced Enterprise Server Edition 11.1: Install IBM DB2 Advanced Enterprise Server Edition in just a few minutes. IBM DB2 is ideal for development, test, and production infrastructure, and MidVision’s RapidDeploy is shipped for streamlined administration.

IBM DB2 Advanced Workgroup Server Edition 11.1

IBM DB2 Advanced Workgroup Server Edition 11.1: Install IBM DB2 Advanced Workgroup Server Edition in just a few minutes. IBM DB2 is ideal for development, test, and production infrastructure, and MidVision’s RapidDeploy is shipped for streamlined administration.

Kotlin Programming Language Windows Server 2012R2

Kotlin Programming Language Windows Server 2012R2: Kotlin is flexible and interoperable with other platforms and native languages, offering code sharing between JVM and JavaScript platforms. It's also tool-friendly, as any Java IDE can be chosen.

Kotlin Programming Language Windows Server 2016

Kotlin Programming Language Windows Server 2016: Kotlin is flexible and interoperable with other platforms and native languages, offering code sharing between JVM and JavaScript platforms. It's also tool-friendly, as any Java IDE can be chosen.

MayaNAS Cloud Enterprise

MayaNAS Cloud Enterprise: MayaNAS Cloud is a full-featured, enterprise-grade, software-defined storage solution that provides high-performance unified file and block services using cloud-native disks and object storage.

MayaScale Cloud Data Platform

MayaScale Cloud Data Platform: MayaScale Cloud Data Platform offers high-performance shared storage using NVMe (non-volatile memory express) fabric over TCP and iSCSI protocols.

Qorus Integration Engine 4.0 on Oracle Linux 7

Qorus Integration Engine 4.0 on Oracle Linux 7: This agile and scalable platform for back-office IT business process automation serves as a low-cost and low-code enterprise integration solution.

Robotic Process Automation (RPA)

Robotic Process Automation (RPA): Download and use the trial edition of Kryon Studio to experience how easy it can be to automate processes. This free trial is a useful tool for anyone looking to evaluate Kryon’s robotic process automation solutions.

XCFrontier - Virtualisation Services

XCFrontier - Virtualisation Services: XCFrontier is an innovative cloud virtualization solution for faster internet browsing that works with the Microsoft Office suite and other software applications.

Web applications

Azure Monitor Agent for Citrix Environments: Use the power of Azure Monitor and Log Analytics with this agent for your Citrix workers, servers, and desktops. You don’t need a SQL Server or additional infrastructure for monitoring data.

Azure Monitor for RDS and Windows Virtual Desktop: Monitor user experiences within Remote Desktop Services and Windows Virtual Desktop.

Check Point CheckMe: CheckMe runs simulations that test if your security technologies are equipped to mitigate advanced threats, and it provides a comprehensive report on your security state.

D3 Security: Rapidly validate threats with out-of-the-box security integrations and adaptable playbooks that guide your security operations platform to automated incident response.

Discovery Hub with Azure Data Lake: Deploy the Discovery Hub application server and Azure Data Lake. Discovery Hub is a high-performance data management platform that accelerates your time to data insights.

Forscene Edge - BYOL: The Forscene Edge is a professional two-way video transcoding engine for generating lightweight Blackbird video-editing proxies. The Blackbird proxy provides frame-accurate navigation and plays media and edits completely render-free.

Integris Data Privacy Automation: Use Integris to discover and classify sensitive data across any system, apply data-handling policies, assess risk, and take action.

Intel Optimized Data Science VM for Linux (Ubuntu): This preconfigured data science virtual machine comes with Python environments optimized for deep learning on Intel Xeon processors.

Jira Service Desk Data Center: By linking Jira Service Desk with Jira Software, IT and developer teams can collaborate on one platform to fix incidents faster and push changes with confidence.

SCOM Alert Management: SCOM Alert Management extends the capabilities of Microsoft Alert Management with automation of alert rules for the System Center Operations Manager group connected to the Log Analytics workspace.

Security for Microsoft 365: SoftwareONE's Security for Microsoft 365 is a managed security service helping customers improve the return on their Microsoft security investments. SoftwareONE security consultants will plan, set up, enhance, and maintain threat detection.

SIMBA Chain: SIMBA Chain's Blockchain-as-a-Service platform allows users to quickly deploy decentralized applications (dApps). These dApps allow secure, direct connections between users and providers, eliminating third parties.

Container solutions

Decent Blockchain Node: DCT is the platform cryptographic asset on the DCore blockchain that serves as the fundamental currency for publishing and purchasing. It also funds the miners and seeders who maintain the platform. This image contains the DCore node and CLI wallet.

Consulting services

Active Directory Assessment: 4-Week Assessm. (GB): This assessment by Dots. will review your Active Directory environment, architecture, DNS configuration, backup policy, and administrative procedures to provide audit findings and best-practice recommendations.

AD Connect: 1 Day Implementation: CDW will assist your organization in creating storage accounts in Microsoft Azure for use with an on-premises, cloud-enabled storage appliance, resulting in a hybrid cloud storage solution.

Airnet Azure Foundations: 2-day Implementation: Migrate to the cloud quickly and easily with an automated setup of your Azure environment using a scalable, standardized, and pre-architected framework from Airnet Group Inc.

Airnet Systems Assessment Tool: 1-day Assessment: Review tiered budgeting options for your move to Azure based on Airnet Group Inc.'s detailed reports of server core level inventory, cost, and performance data from your entire IT infrastructure.

App Modernization: 2 Hour Briefing: Oakwood Systems Group will review your business drivers, establish goals for modernization, discuss approaches, provide recommendations for Azure services, and help you develop a better understanding of the options available.

Application Modernization: 2 Week Assessment: RDA will work with your technical team to collect data about identified applications and then design, plan, and document key considerations for an application modernization effort using Azure.

Azure AD Single Sign-On (SSO): 2-Day Implementation: Mismo Systems LLP will configure Azure Active Directory Single Sign-On, enabling you to centrally manage users' access across Software-as-a-Service applications.

Azure Assessment: 1-Week Assessment: Tallan will work with your team to review your on-premises and cloud environments, cover best practices for deployment and app modernization, and provide documentation and recommendations.

Azure DevOps: 1 Hour Briefing: This comprehensive briefing by Oakwood Systems Group will help you develop a better understanding of how to implement Azure DevOps within your business, no matter how big your IT department or what tools you’re using.

Azure Disaster Recovery: 1-Day Workshop: You will walk away with a comprehensive understanding of Azure Backup and Azure Site Recovery. In many cases, a partial or complete implementation can be achieved in this workshop from InsITe Business Solutions.

Azure Migration 6-Wk Assessment & Implementation: TapLogic’s Azure Platform Migration Service gives service providers in the agricultural industry the tools and resources to develop a plan for adopting the best Microsoft Azure solution for their business needs.

Azure Site Recovery: 3-Day Implementation: CDW will install and configure Azure Site Recovery, establishing a Disaster Recovery-as-a-Service solution that allows you to replicate up to five of your virtual machines to Microsoft Azure.

Azure Storage for Backup: 1-Day Implementation: The Microsoft Azure Storage for Backup engagement by CDW will provide best practices and knowledge transfer in demonstrating and maximizing the benefits of utilizing Azure Storage.

CCG Customer Intelligence for Retail: In this engagement, CCG Analytics will implement Customer Intelligence, an analytics platform developed for mid-market retailers who want to elevate the customer experience and dominate the retail omnichannel.

Cloud Aware - Events: 5 Week Implementation: This implementation by Meylah Corporation involves Cloud Aware - Event in a Box, a collection of event planning resources that simplifies the process of customer acquisition.

Cloud Migration Assessment - 6 Days Assessment: Incremental Group’s Cloud Migration Assessment is carried out by one of our senior cloud engineers and will involve compiling a complete review and cloud migration proposal for your organization.

Connecting with S2S VPN: 1-Day Implementation: CDW will assist you in configuring Azure to allow connectivity between your Azure tenant resources and on-premises resources via a site-to-site VPN.

Data Compliance Monitoring - 1 Hour Briefing: Discover how you can automate your data compliance and governance strategy by leveraging Azure, Azure Cosmos DB, and Brilliant IG. Brilliant IG, by CTO Boost, is an automated compliance monitoring platform on Azure.

Data Science Discovery Pack: 2-wk Assessment: Elastacloud combines the delivery of a data architecture blueprint using the latest Azure platform tools and services with an innovative data science work package.

ERP to Azure Migration: 2 Week Implementation: DXC will provide a streamlined migration for organizations desiring to move their Dynamics GP, Dynamics SL, or Dynamics NAV solution to Azure Infrastructure-as-a-Service.

Optimized Architecture: 1-Day Workshop (Virtual): Compare Infrastructure-as-a-Service and Platform-as-a-Service hosting options to save money through the use of Azure App Service. This workshop by Dynamics Edge is intended for cloud architects and IT professionals.

QuickBooks DT on Azure single install: 4-hr imp: Get your existing QuickBooks desktop software running on your Azure cloud server, complete with integrated applications, in this implementation by Mendelson Consulting.

TCO & Cloud Readiness Assessment - 6 Wk Assessment: Ensono's assessment will involve data gathering, creation of an HCP tenant, ingestion of the initial server list, data tagging, application readiness scoring, and a presentation of the findings.

TFS to Azure DevOps Migration: 2-Wk Implementation: Tallan will work with your team to create an Azure DevOps migration plan to be developed during the assessment portion of this implementation. From there, we will start the migration process to Azure DevOps.

TFS to Azure DevOps: 4-week Implementation: Oakwood Systems Group's three-phase migration plan will move your on-premises Team Foundation Server (TFS) to Azure DevOps Services.

Umanis lifts the hood on their AI implementation methodology

Microsoft creates deep, technical content to help developers enhance their proficiency when building solutions using the Azure AI Platform. Our preferred training partners redeliver our LearnAI Bootcamps for customers around the globe on topics including Azure Databricks, Azure Machine Learning service, Azure Search, and Cognitive Services. Umanis, a systems integrator and preferred AI training partner based in France, has been innovating in Big Data and Analytics in numerous verticals for more than 25 years and has developed an effective methodology for guiding customers into the Intelligent Cloud. Here, Philippe Harel, the AI Practice Director at Umanis, describes this methodology and shares lessons learned to empower customers to do more with data and AI.

2019 is the year when artificial intelligence (AI) and machine learning (ML) are shifting from being mere buzzwords to real-world adoption and rollouts across the enterprise. This year reminds us of the cloud adoption curve a few years ago, when it was no longer an option to stay on-premises alone, but a question of how to make the shift. As you draw up plans on how to best use AI, here are some learnings and methodologies that Umanis is following.

Given the ever-increasing speed of change in technology, along with the variety of sectors and industries Umanis works in, they focused on building a methodology that could be standardized across AI implementations from project to project. This methodology follows an iterative cycle: assimilate, learn, and act, with the goal of adding value with each iteration.

The Azure platform acts as an enabler of this methodology as seen in the image below.

In most data and artificial intelligence (AI) projects implemented at Umanis, several trends are gaining momentum and are likely to amplify in 2019:

  • More unstructured, big, and real-time data.
  • An increased need for fast and reliable AI solutions to scale up.
  • Increasing expectations from customers.

In this blog post, we will explain how you can address these kinds of projects, and how Umanis maps their approach to the Azure offering to deliver solutions that are easy to use, operationalize, and maintain.

The 3 phases of the AI implementation methodology

1. Assimilate

In this initial phase, anything can come at you, from the good to the big, bad, and ugly: databases, text, logs, telemetry, images, videos, social networks, and more are flowing in. The challenge is to make sense of it all so that you can serve the next phase (Learn) successfully. By assimilating, we mean:

  • Ingest: The performance of an algorithm depends on the quality of the data. We consider “ingesting” to be checking the quality of the data, the quality of the transmission, and building the pipelines to feed the subsequent parts.
  • Store: Since the data will be used by highly demanding algorithms (I/O, processing power) that will mix data from various sources, you need to store the data in the most efficient way for future access by algorithms or data visualizations.
  • Structure: Finally, you’ll need to prepare the data for algorithm consumption and execute as many transformation, preprocessing, and cleaning tasks as you can to speed up the data scientists’ activities and algorithms.

2. Learn

This is the heart of any AI project: Creating, deploying, and managing models.

  • Create: Data scientists use available data to design algorithms, train their models, and compare the results. There are two key points to this:
  1. Don’t make them wait for results! Data scientists are rare resources and their time is precious.
  2. Allow any language or combination of languages. From that perspective, Azure Databricks is a great solution, as it addresses this natively by allowing different languages to be used in a single block of code.
  • Use: Once algorithms are deployed as APIs and consumed, the need for parallelization goes up. Meeting SLAs and testing the performance of the sending, processing, and receiving pipeline are crucial.
  • Refine: Refining the quality of algorithms ensures reliable results over time. The easy part of this activity is automatic re-training on a regular basis. The less obvious part is what we call the “human in the loop” activity: in short, a Power BI report shows the results of predictions, a human can quickly re-classify them as needed, and the machine uses this human expertise to get better at its task.

3. Act

All of the above phases are useless unless you actually make good use of the algorithm’s added value.

  • Inform: Any mistake in code, misunderstanding of requirements, or bug can be devastating, as first user impressions are crucial. Therefore, instead of a “big bang” of visualizations, start very small, iterate very quickly, and on-board a few key users to secure adoption before widening the audience.
  • Connect: Systems that use the information from algorithms need to be plugged in. This is called RPA, IPA, or automation in general, and the architectures can vary greatly on each project. Don’t overlook the need for human monitoring of this activity. Consider the impact of the most wrong answer from an algorithm, and you will get a good feel of the need for human supervision.
  • Dialog: When dealing with human interaction, so much comes into play that, to be successful, the scope of the interaction needs to be narrowed down to the actions that really add value and are not trivial (that is, not easily possible via classic interfaces).

Conclusion

This methodology will certainly change and adapt over time. Nevertheless, Umanis has found it to be a robust way of rolling out end-to-end data and AI projects while minimizing friction and risk. By using this approach to present a data and AI project to both customers and internal teams, everyone can get a good feel for the activities, technologies, and challenges involved. It’s one way to address the “urgent need to build shared context, trust, and credibility with your team,” as Satya Nadella puts it in his book, Hit Refresh. This methodology is a great way to build trust in your relationships.

If you want more information about the methodology used by Umanis, you can find them at upcoming conferences in the next two months (in French) discussing this topic in Luxembourg, Paris, and Nantes.

Learn More

Learn more about the Azure Machine Learning service

Get started with a free trial of Azure Machine Learning service

Resource governance in Azure SQL Database

This blog post continues the Azure SQL Database architecture series where we share background on how we run the service, as described by the architects who originally created the service. The first two posts covered data integrity in Azure SQL Database and how cloud speed helps SQL Server database administrators. In this blog post, we will talk about how we use governance to help achieve a balanced system.

Allocated and governed resources

When you choose a specific Azure SQL Database service tier, you are selecting a pre-defined set of allocated resources across several dimensions such as CPU, storage type, storage limit, memory, and more. Ideally, you will select a service tier that meets the workload demands of your application; however, if you over- or under-size your selection, you can easily scale up or down accordingly.

With each service tier selection, you are also inherently selecting a set of resource usage boundaries or limits. For example, a Business Critical Gen4 database with eight cores has the following resource allocations and associated limits:

Compute size: BC_Gen4_8
Memory (GB): 56
In-memory OLTP storage (GB): 8
Storage type: Local SSD
Max data size (GB): 650
Max log size (GB): 195
TempDB size (GB): 256
IO latency (approximate): 1-2 ms (read), 1-2 ms (write)
Target IOPS (64 KB): 40,000
Log rate limit (MBps): 48
Max concurrent workers (requests): 1,600
Max concurrent logins (requests): 1,600
Max allowed sessions: 30,000
Number of replicas: 4
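As a sketch of how to inspect these limits at runtime, the sys.dm_user_db_resource_governance dynamic management view returns the governance configuration in effect for the current database (column availability can vary by service tier and service version):

```sql
-- Inspect the resource governance configuration applied to the current database
SELECT *
FROM sys.dm_user_db_resource_governance;
```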

As you increase the resources in your tier, you may also see limits change up to a certain threshold. Furthermore, these limits can be automatically relaxed over time, but they are never further restricted to the customer’s detriment.

We document resource allocation by service tier and also the associated resource governance limits in the following resources:

While resource allocation by service tier is intuitive to customers (the more you pay, the more resources you get), resource governance and boundaries have historically been a less clear subject for customers. While we are increasing transparency around these governing mechanisms, it is important to understand the broader purposes behind resource governance in a database as a service (DBaaS). For this, we’ll talk next about what it takes to achieve a balanced system.

Providing a balanced database as a service (DBaaS)

For the context of this blog post, we define a system as balanced if all resources are sufficiently utilized without encountering bottlenecks. This balance involves an interplay of resources such as CPU, IO, memory, and network, paired with an application’s workload characteristics, maximum tolerated latency, and desired throughput.

With Azure SQL Database, our view of a balanced system must also take a broad and comprehensive perspective in order to meet articulated DBaaS requirements and customer expectations.

Azure SQL Database surfaces a familiar and popular database ecosystem with the intent of giving customers the following additional benefits:

  • Elasticity of scale – Customers can provision a database based on the throughput requirements of their application. As throughput requirements change, the customer can easily scale up or down.
  • Automated backups with self-service restore to any point in time – Database backups are automatically handled by the service, with log backups generally occurring every five to ten minutes.
  • High availability – Azure SQL Database supports a differentiated availability SLA with a maximum of 99.995 percent, backed by availability zone resilience to infrastructure failures.
  • Predictable performance – Customers on the same provisioned resource level always get the same performance with the same workload.
  • Predictable scalability – Customers using the hyperscale service tier can rely on predictable latency of online scaling operations, backed by a verifiable scaling SLA. This gives the customer a reliable tool to react to changing compute capacity demands in a timely manner.
  • Automatic upgrades – Azure SQL Database is designed to facilitate transparent hardware and software upgrades, as well as periodic, lightweight software updates.
  • Global scale – Customers can deploy databases around the world and easily provision geographically distributed database replicas enabling regional data access and disaster recovery solutions. These solutions are backed by strong geo-replication and failover SLAs.

For the Azure SQL Database engineering team, providing a balanced DBaaS system for customers goes well beyond simply providing the purchased CPU, IO, memory, and storage. We must also honor all aforementioned factors and aim to balance these key DBaaS factors along with overall performance requirements.

The following figure shows some of the key resources that are governed within the service.

Figure 1: Governed resources in Azure SQL Database

We need to provide this balanced system in such a way that allows us to continually improve the service over time. This requirement for continual improvement implies a necessary level of component abstraction and over-arching governance. Governance in Azure SQL Database ensures that we properly balance requirements around scale, high availability, recoverability, disaster recovery, and predictable performance.

To illustrate, let’s use transaction log rate governance as an example of why we actively manage in order to provide a balanced DBaaS. Transaction log governance is a process in Azure SQL Database used to limit high ingestion rates for workloads such as bulk insert, select into, and index builds.

Why govern this type of activity? Consider the following dimensions and the impact of transaction log generation rate.

  • Database recoverability – We make guarantees around the maximum window of possible data loss based on transaction log backup frequency.
  • High availability – Local replicas must remain within a recoverability and availability (up-time) range that aligns with our SLAs.
  • Disaster recovery – Globally distributed replicas must remain within a recoverability range that minimizes data loss.
  • Predictable performance – Log generation rates must not over-saturate the system or create unpredictable performance.

Log rates are set such that they can be achieved and sustained in a variety of scenarios, while the overall system can maintain its functionality with minimized impact to the user load. Log rate governance ensures that transaction log backups stay within published recoverability SLAs and prevents an excessive backlog on secondary replicas. We have similar impact and interdependencies across other governed areas including CPU, memory, and data IOPs.
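One way to observe log rate governance in action is to look for log-governor waits in the database-scoped wait statistics; as a sketch (wait-type names may vary by service version):

```sql
-- Check whether sessions in this database are being throttled by log rate governance
SELECT wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_db_wait_stats
WHERE wait_type LIKE 'LOG_RATE%' OR wait_type LIKE 'POOL_LOG_RATE%';
```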

How we govern resources in Azure SQL Database

While we use a multi-faceted approach to governance, today we rely primarily on three main technologies: Job Objects, File Server Resource Manager (FSRM), and SQL Server Resource Governor.

Job Objects

Azure SQL Database leverages multiple mechanisms for governing overall performance for a database. One of the features we leverage is Windows Job Objects, which allows a group of processes to be managed and governed as a unit. We use this functionality to govern virtual memory commit, working set caps, CPU affinity, and rate caps, and we onboard new governance capabilities as the Windows team releases them.

File Server Resource Manager (FSRM)

Available in Windows Server, we use FSRM to govern file directory quotas.

SQL Server Resource Governor

A SQL Server instance has multiple consumers of resources, including user requests and system tasks. SQL Server Resource Governor was introduced to ensure fair sharing of resources and to prevent out-of-control requests from starving other requests. The feature was introduced in SQL Server years ago and over time was extended to govern several resources, including CPU, physical IO, and memory, for a SQL Server instance. We use this functionality in Azure SQL Database as well, to help govern IOPS (both local and remote), CPU caps, memory, worker counts, session counts, memory grant limits, and the maximum number of concurrent requests.
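For readers who know Resource Governor from on-premises SQL Server, a minimal configuration sketch looks like the following (pool, group, and login names are illustrative; the classifier function lives in master, and in Azure SQL Database this is all managed by the service rather than configured by you):

```sql
-- Pool capping CPU and memory for a reporting workload (names are illustrative)
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 30);

CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;
GO

-- Classifier function: routes incoming sessions to a workload group by login
CREATE FUNCTION dbo.fn_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN (CASE WHEN SUSER_SNAME() = N'report_user'
                 THEN N'ReportingGroup'
                 ELSE N'default' END);
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```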

Beyond the three main technologies, we also created additional mechanisms for governing transaction log rate.

Configurations for safe and predictable operations

Consider all the settings one must configure for a well-tuned on-premises SQL Server instance, including database file settings, max memory, max degree of parallelism, and more. In Azure SQL Database, we pre-configure several such settings based on similar best practices. And as mentioned earlier, we pre-configure SQL Server Resource Governor, FSRM, and Job Objects to deliver fairness and prevent starvation. The reasoning behind this is to aim for safe and predictable operation. We can also provide varying settings for customers based on their workload and specific needs, as long as they conform to the safety limits defined for the service.

Improvements over time

Sometimes we deploy software changes that improve the performance and scalability of specific operations. Customers benefit automatically and we might exceed the defined limits and/or increase them for all customers in the future. Furthermore, as we enhance the hardware of machines, storage, and network, these benefits may also be transparently available to an application. This is because we have defined this DBaaS abstraction layer instead of just providing a specific physical machine.

Evolving governance

The Azure SQL Database engineering team regularly enhances governance capabilities used in the service. We continually review our models based on feedback and production telemetry and we modify our limits to maximize available resources, increase safety, and reduce the impact of system tasks.

If you have feedback to share, we would like to hear from you. To contact the engineering team with feedback or comments on this subject, please email SQLDBArchitects@microsoft.com.

New to Microsoft 365 in March—tools to enable teamwork and enhance security in the workplace
