iPhone 7: 15 things you need to know

It’s that time of the year again – the time when Apple readies its latest iPhone model for launch, and tech enthusiasts go into overdrive trying to guess (and second-guess!) all the new features and attractions that the new flagship smartphone will have. We are less than three weeks away from the start of pre-ordering for iPhone 7, and here is a roundup of all things of note about the eagerly anticipated new Apple smartphone:

 

  1. Mark the dates – Traditionally, September has been the month when new iPhone models are launched (at least, from the days of iPhone 5). This year, there is not going to be any exception. Professional Apple software and iOS app developers have confirmed that iPhone 7 will be launched on the 16th of September, with pre-orders being taken from September 9. The official announcement should come a couple of days before that, on the 5th, 6th or 7th of September.
  2. Dual camera lens for the phablet – Huawei P9 had it, LG G5 had it, and Apple looks like it will finally implement dual camera lens in its upcoming flagships. It should be noted that the feature is more likely to be present on iPhone 7 Plus – the phablet – and not on the iPhone 7. The dual camera will allow users to merge two separate shots, and create crisper, better visual effects. Digital zooming, something that the iPhones do not do as well as some Android devices, will receive a boost as well.
  3. iPhone 7 will look a lot like iPhone 6 – And that is precisely why there are rumours that this year’s handset might be called ‘iPhone 6SE’, and not ‘iPhone 7’. 2016 was originally supposed to be a ‘tock’ year in the ‘tick-tock’ Apple release cycle (the ‘tick’ years being the ones when the ‘S’ variants are launched). However, reports suggest that Apple is ditching its 2-year redesign cycle in favour of a 3-year cycle. The move makes sense, since the percentage of iPhone users who upgrade their handset every year is very low. Yes, the iPhone 7/iPhone 6SE will have certain interesting changes in its form factor, but don’t expect something radically different from iPhone 6.
  4. Goodbye, Home button? – Those who make mobile apps and software on a professional level seem almost sure about this. In the latest range of MacBooks, the trackpad responds to pressure (i.e., haptic feedback). The older, movable trackpads have given way – and the laptop-using experience has become smoother because of it. Apple Inc. seems all set to do the same with the upcoming iPhone 7 model. The good old ‘Home’ button will be replaced by a touch-capacitive button which will work on haptic feedback. Presumably, the new Home button will do more than just unlock the iPhone.
  5. iPhone 7 might be water-resistant – This is no more than an outside chance, but Apple is certainly working on waterproof smartphone models. With many Android phones being water-resistant from as early as 2012, it is high time the Cupertino tech giant adopted this technology as well. If the iPhone 7 is indeed water-resistant, it will add to the longevity of the device. And that’s a prime concern for end-users, after all!
  6. iPhone 7 to start from 32GB – This one is pretty much certain. iPhone app developers and general smartphone enthusiasts have reported that the base model of iPhone 7 will have 32GB of storage space. This, obviously, means that the line of 16GB iPhones will be discontinued. With Apple phones not having SD card support, this tweak makes a lot of sense. People will now be able to take more photos and store them all on their devices, without having to worry about running out of storage.

Note: There is a possibility that iPhone 7 will also have a 256GB variant. If Apple indeed releases it, the model will be priced at a significantly higher level than the models with lower built-in storage.

  7. Three versions instead of two? – There was the iPhone 6 & the iPhone 6 Plus, and then the iPhone 6S and the iPhone 6S Plus. This year, Apple might just spring a surprise by launching a premium-range iPhone 7 Pro model (in addition to iPhone 7 and iPhone 7 Plus). It will be designed on the lines of the iPad Pro, and should be compatible with the already much-talked-about Smart Connector. Certain leaks of the touted iPhone 7 Pro have already been shared by sources like uSwitch.
  8. iPhone 7 in at least one more colour – Earlier this month, a Facebook post from China Unicom – a major carrier partner of Apple Inc. – showcased an all-new ‘deep blue’ coloured smartphone. Since then, rumours have shifted towards the probability of iPhone 7 being available in ‘Space Black’ – the colour that has already been used on the Apple Watch. There has not been much movement on the colour front since the arrival of the ‘Rose Gold’ iPhone, and the news about the new colours has fuelled the excitement around iPhone 7 further.
  9. iOS 10 to arrive with iPhone 7 – Okay, this one is stating the obvious. iOS 10 beta 6 has already been seeded to iPhone app development experts, and the fifth public beta of the platform has also been released. The final stable release of iOS 10 will arrive on iPhone 7. There is a fairly large number of interesting new features and improvements in iOS 10.

Note: Apple will be desperately hoping to avoid a rehash of the iOS 8 fiasco. That update, which debuted on iPhone 6, had many glitches – and became stable only after the 8.4 update.

  10. The headphone jack is likely to be gone – This has been a constant buzz on various leading online Apple sources and mobile app development forums. Probably in a bid to make iPhone 7 slimmer than its predecessors, Apple will do away with the conventional 3.5 mm headphone jack (the one all of us are so familiar with). In its place, there will be a Bluetooth-based solution and/or a Lightning port for users. Users will be able to plug their EarPods into the Lightning connector to listen to music. What’s more, there will be an option for charging iPhone 7 while playing music. The news of the headphone jack being ditched has not been universally well received, and there has even been a petition to keep it – but it seems that Apple will be launching new audio solutions in the new flagship.

  11. Processor and RAM – The iPhone 6S has 2GB of Samsung LPDDR4 RAM. Tim Cook and his team are looking to take things up to the next level this year by giving iPhone 7 3GB of RAM. This will be the most RAM in the history of iPhones, and the soon-to-be-released handset will also have the new A10 chip (by TSMC or Samsung), along with the M10 co-processor. The bigger RAM and the faster-than-ever A10 chip should make iPhone 7 a really powerful device.

  12. Better battery performance – Neither Apple nor Google has quite cracked the smartphone battery puzzle yet, but both are trying their best. The improved battery of iPhone 7 is the latest endeavour in this regard. According to a recent report, the upcoming Apple flagship phone will ship with a 3100 mAh battery – nearly 13% larger than the battery in iPhone 6S Plus. The battery capacity of iPhone 7 should be in the region of 7.05 Wh (watt-hours), which is also about 6.5% more than that of iPhone 6S.

  13. Wireless charging – The chances of iPhone 7 having a wireless charging option cannot be ruled out. Samsung, one of the company’s arch-rivals, has already introduced this feature in the Galaxy S7 phones. A fairly large number of new prototypes have reportedly been tested – each with a novel feature (USB-C connector, in-screen fingerprint scanner, multi Force Touch, wireless charging) – and it is difficult to say which (or how many) of these features will actually be implemented.

Note: The Lightning Connector, if present in iPhone 7, should serve as a suitable tool for wireless charging.

  14. A bezel-less display screen – The screen resolution of iPhone 6 is 1334×750 – a fairly weak figure when compared with the 4K displays that many high-end Android phones have. It can be reasonably expected that Apple will address this by going back to a glass-on-glass display, which should enhance the resolution. The display will have no bezel, according to a report from DigiTimes. The remodelled screen of iPhone 7 will go well with its new, haptic feedback-powered ‘Home’ tab.

  15. The pocket pinch – Apple generally prices its new flagship iPhones in the same range as the preceding ‘S’ variant, while decreasing the price of the latter somewhat. This convention will be followed this year too. The price tag of iPhone 7 will be roughly similar (~$649 for the lowest-storage model) to what the iPhone 6S costs now. Of course, the ‘lowest-storage model’ here refers to the 32GB device, since there will be no 16GB iPhone 7.

The iPhone 7 is reported to be marginally slimmer than the iPhone 6S (0.28″ vs 0.282″). The phone’s camera will also have additional sensors, improving overall camera functionality. This year’s new iPhone is more likely to be some sort of an incremental update, with bigger changes coming along in 2017.

After all, 2017 marks ten years of the iPhone – and Apple won’t miss the opportunity to surprise fans on the occasion.

What do you think of the iPhone 7?



Freshly installed Win7: Windows Updates take forever *FIX*

We’ve heard a number of people report that Win7, when freshly installed, takes a long time and a lot of memory to apply the first round of updates. The ‘check for updates’ part of the sequence can take more than 30 minutes … in some cases several days!

This is due to a bug* in Windows Update (WU), which causes the processing (“checking” or “searching”) stage to take far too long when there are a great many updates to process. The bug is resolved with the new WU client delivered in the KB3172605 update. However, the bug still applies while you are installing KB3172605, making it take a long time … unless you use this procedure.

The procedure below is the result of a lot of experimentation. It is the speediest way we have yet found for getting Windows fully updated.

A trick we have learned:
  1. Download the appropriate (x64 or x86) versions of these three updates: KB3020369, KB3172605, and KB3125574.
  2. Open an elevated PowerShell prompt and run the following commands, which will allow the next updates to install quickly:
    1. Stop-Service wuauserv
    2. Remove-Item C:\Windows\SoftwareDistribution\WuRedir
  3. Double-click and run the KB3020369 update (previously downloaded). Should take less than 2 minutes to run, and will not require a reboot.
  4. Now double-click the KB3172605 update you previously downloaded. Follow the prompts. Reboot when it says to. (This step should take about 1 minute).
  5. Double-click and run the KB3125574 update (previously downloaded). Should take about 12 minutes to run. It will require a reboot that takes 5 minutes to complete. 
  6. Begin WU (Windows Update) after completing the above steps. A list of 60+ available updates should be returned within 5 minutes.
  7. Finish updating normally … rebooting when it says to. You will probably need to reboot and re-check for updates at least two more times.
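The first part of the trick (steps 2 and 3) can be sketched as a single elevated PowerShell session. The .msu file name below is illustrative for the x64 package; use whichever architecture you downloaded:

```shell
# Elevated PowerShell. Stop the Windows Update service and remove the
# WuRedir folder so the next update installs quickly (Windows rebuilds
# both the folder and the service state on its own later).
Stop-Service wuauserv
Remove-Item -Recurse C:\Windows\SoftwareDistribution\WuRedir

# Install the first update silently; it should finish in under 2 minutes
# and does not require a reboot.
wusa.exe .\Windows6.1-KB3020369-x64.msu /quiet /norestart
```

From there, continue with KB3172605 and KB3125574 interactively as described above, rebooting when prompted.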

In my own tests, using this method, I was able to take a Win7 SP1 Ultimate (64bit) system from fresh installed to fully updated in about 1.5 hours. The WuRedir directory (which I remove in step 2) will be rebuilt by Windows along the way, and the WU service will be restarted on its own.

For other WU issues, start with this omnibus troubleshooting article.
Some folks advocate using wsusoffline, which requires some preparation work. Or you may be interested in this PowerShell Windows Update Module. Or this other one called PSWU.
*Microsoft has not formally said much about this bug. I was tipped off when I heard Microsoft Premier Field Engineer Clint Huffman talk about it in this podcast.


Reapply a GPO that is configured to apply once

When creating a Group Policy Preference you can configure it to apply only once. The exact wording is “Apply once and do not reapply”. But when you are implementing such a GPP you most likely want to test the setting prior to moving it into production. So here’s a brief explanation of how to reapply a GPP that is configured to apply once.

The below screenshot illustrates a GPP that is configured to write a registry key to HKLM\Software\Demo\RunIT with the value set to True.


When applying the GPP on a client, the registry settings are created.


But when deleting the registry key, the settings will not be re-applied the next time GPPs are processed. So how does the GPP know it has already been processed once? The answer: it’s stored within the registry. When we look at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Group Policy\Client\RunOnce we find one or more GUIDs for the GPPs that are configured to apply only once.


The next challenge is to find out which GUID corresponds to which GPP setting. To find out, we go back to the Group Policy Management console, select the GPP and select “Display Xml”.



The “FilterRunOnce id” is the GUID stored within the registry.


So if we want to reapply a setting that is configured to only apply once, we just delete the GUID from the registry and run gpupdate.
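In practice, the reset can be scripted from an elevated PowerShell or command prompt. The GUID below is a placeholder — substitute the one shown as “FilterRunOnce id” in your own Display Xml output:

```shell
# Delete the RunOnce marker for this GPP (GUID is a placeholder), then
# refresh Group Policy so the preference is applied again.
reg delete "HKLM\SOFTWARE\Microsoft\Group Policy\Client\RunOnce" /v "{00000000-0000-0000-0000-000000000000}" /f
gpupdate /force
```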

That’s it.


VMware Update : ESXi-6.0.0-20160804001-standard

Imageprofile ESXi-6.0.0-20160804001-standard (Build 4192238) includes the following updated VIBs:

| Name | Version | Vendor | Summary | Category | Severity | Bulletin |
|------|---------|--------|---------|----------|----------|----------|
| esx-base | 6.0.0-2.43.4192238 | VMware | Updates the ESX 6.0.0 esx-base | bugfix | critical | ESXi600-201608401-BG |
| esx-ui | 1.4.0-3959074 | VMware | VMware Host Client | unknown | unknown | ESXi600-201608404-BG |
| misc-drivers | 6.0.0-2.43.4192238 | VMware | Updates the ESX 6.0.0 misc-drivers | bugfix | important | ESXi600-201608405-BG |
| net-vmxnet3 | 1.1.3.0-3vmw.600.2.43.4192238 | VMware | Updates the ESX 6.0.0 net-vmxnet3 | bugfix | important | ESXi600-201608402-BG |
| tools-light | 6.0.0-2.43.4192238 | VMware | Updates the ESX 6.0.0 tools-light | bugfix | important | ESXi600-201608403-BG |
| vsan | 6.0.0-2.43.4097166 | VMware | ESXi VSAN | unknown | unknown | ESXi600-201608401-BG |
| vsanhealth | 6.0.0-3000000.3.0.2.43.4064824 | VMware | Updates the ESX 6.0.0 vsanhealth | bugfix | important | ESXi600-201608401-BG |

 

Source : https://esxi-patches.v-front.de/

iOS 9.3.4 is available now

Apple has released iOS 9.3.4 download for iPhone, iPad, iPod touch users with important and critical security fixes. Here are the direct download links.

iOS 9.3.4 Is An Extremely Crucial Update, According To Apple

Pretty much out of nowhere, Apple pushed out an update for iOS devices today. It’s pretty apparent that the release focuses on none other than security, which means it’s an important update to have. And we’re quite certain this update will patch the iOS 9.3.3 jailbreak, so please stay away from the update if you’re looking to liberate your device.

The complete changelog of the update is as follows:

[screenshot of the changelog]

There are two routes you can take to update to iOS 9.3.4 – over the air and iTunes. The OTA route is quick and easy, taking no more than a few minutes. Simply connect your device to a working WiFi network, navigate to Settings > General > Software Update, and tap on the ‘Download and Install’ button when the iOS 9.3.4 download pops up.

The iTunes route is also available and completely optional. It ensures that you get the maximum performance out of a particular update, but at the expense of losing all your files and data. Of course, you do have the option to back everything up using iTunes or iCloud, but restoring everything afterwards is, well, a hassle.


Downgrade iOS 9.3.4 To iOS 9.3.3 – iPhone, iPad Tutorial

If you’re taking the iTunes route to install iOS 9.3.4 download, then the direct links for the IPSW files are embedded below.

iOS 9.3.4 final IPSW download links for iPhone:

iOS 9.3.4 final IPSW download links for iPad:

iOS 9.3.4 final IPSW download links for iPod touch:

Remember, if you care about your jailbreak, do not update to iOS 9.3.4. It will very likely lay things to rest, and you’ll be left with no jailbreak apps or tweaks to play around with.


The top anti-ransomware website you should know about


Being hit by any kind of malware is nasty, but ransomware packs an extra-tough punch because it locks you out of your own data. We’ve shown ways to protect yourself from ransomware, and it’s important to stay vigilant in the fight against these terrible attacks.

Now, there’s a site that everyone should visit to learn about ransomware, and it’s called NoMoreRansom.org. Sponsored by Kaspersky and Intel Security, the site aims to be a resource for anyone to learn about ransomware, as well as to help people affected by the infection get their stuff back if possible.

 

The site includes an FAQ section on ransomware, links to information on specific instances of ransomware, and advice on how to prevent an attack if you’re just looking for information. Should you be visiting because you’ve been hit by ransomware, the site provides a feature called Crypto Sheriff.

This page allows you to upload two encrypted files, which are then checked by the system to see if a decryption solution is available. Of course, with ransomware changing all the time, there’s no guarantee that your files will be recoverable with this method, but it’s certainly worth a shot. There’s also a place to report a crime if you’ve been hit by a ransomware attack.

Above all, the site reminds you that you should not pay the ransom, because it lets the criminals win and encourages this activity further. Hopefully, everyone can learn something from this page and help fight ransomware going forward.


Google’s new protocol: moving the web from TCP to UDP

Google’s QUIC protocol: moving the web from TCP to UDP

Mattias Geniar, Saturday, July 30, 2016 – last modified: Tuesday, August 2, 2016

The QUIC protocol (Quick UDP Internet Connections) is an entirely new protocol for the web developed on top of UDP instead of TCP.

Some are even (jokingly) calling it TCP/2.

I only learned about QUIC a few weeks ago while doing the curl & libcurl episode of the SysCast podcast.

The really interesting bit about the QUIC protocol is the move to UDP.

Now, the web is built on top of TCP for its reliability as a transmission protocol. To start a TCP connection, a 3-way handshake is performed. This means additional round trips (network packets being sent back and forth) for each new connection, which adds significant delay.

tcp_3_way_handshake

(Source: Next generation multiplexed transport over UDP (PDF))

If on top of that you also need to negotiate TLS, to create a secure, encrypted, https connection, even more network packets have to be sent back and forth.

tcp_3_way_handshake_with_tls

(Source: Next generation multiplexed transport over UDP (PDF))

Innovations like TCP Fast Open will improve the situation for TCP, but they aren’t widely adopted yet.

UDP on the other hand is more of a fire and forget protocol. A message is sent over UDP and it’s assumed to arrive at the destination. The benefit is less time spent on the network to validate packets, the downside is that in order to be reliable, something has to be built on top of UDP to confirm packet delivery.
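As a toy illustration (not QUIC’s actual mechanism), the simplest possible reliability layer over a lossy datagram channel just numbers each message and retransmits until it gets through; the receiver reassembles by sequence number:

```python
import random

def send_reliably(messages, loss_rate=0.3, seed=42):
    """Deliver every message over a channel that drops packets at random,
    by tagging each with a sequence number and retransmitting until it
    arrives. Returns (messages in order, total send attempts)."""
    rng = random.Random(seed)           # deterministic "network" for the demo
    delivered = {}                      # seq -> payload, as seen by receiver
    attempts = 0
    for seq, payload in enumerate(messages):
        while seq not in delivered:
            attempts += 1
            if rng.random() >= loss_rate:   # packet survived the network
                delivered[seq] = payload    # receiver stores it and acks
    return [delivered[i] for i in sorted(delivered)], attempts

data, attempts = send_reliably(["a", "b", "c", "d"])
print(data)          # all four messages arrive despite random losses
print(attempts)      # retransmissions cost extra sends over the network
```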

That’s where Google’s QUIC protocol comes in.

The QUIC protocol can start a connection and negotiate all the TLS (HTTPS) parameters in 1 or 2 packets (depending on whether you are connecting to a new server or a known host).

udp_quic_with_tls

(Source: Next generation multiplexed transport over UDP (PDF))

This can make a huge difference for the initial connection and start of download for a page.

Why is QUIC needed?

It’s absolutely mind boggling what the team developing the QUIC protocol is doing. It wants to combine the speed and possibilities of the UDP protocol with the reliability of the TCP protocol.

Wikipedia explains it fairly well.
As improving TCP is a long-term goal for Google, QUIC aims to be nearly equivalent to an independent TCP connection, but with much reduced latency and better SPDY-like stream-multiplexing support.

If QUIC features prove effective, those features could migrate into a later version of TCP and TLS (which have a notably longer deployment cycle).

There’s a part of that quote that needs emphasising: if QUIC features prove effective, those features could migrate into a later version of TCP.

The TCP protocol is rather highly regulated. Its implementation lives inside the Windows and Linux kernels, in every phone OS, … it’s in pretty much every low-level device. Improving the way TCP works is going to be hard, as each of those TCP implementations needs to follow.

UDP on the other hand is relatively simple in design. It’s faster to implement a new protocol on top of UDP to prove some of the theories Google has about TCP. That way, once they can confirm their theories about network congestion, stream blocking, … they can begin their efforts to move the good parts of QUIC to the TCP protocol.

But altering the TCP stack requires work from the Linux kernel & Windows, from intermediary middleboxes, from users updating their stack, … Doing the same thing on top of UDP is harder for the developers making the protocol, but it allows them to iterate much faster and implement those theories in months instead of years or decades.

Where does QUIC fit in?

If you look at the layers which make up a modern HTTPs connection, QUIC replaces the TLS stack and parts of HTTP/2.

The QUIC protocol implements its own crypto-layer so does not make use of the existing TLS 1.2.

tcp_udp_quic_http2_compared

It replaces TCP with UDP and on top of QUIC is a smaller HTTP/2 API used to communicate with remote servers. The reason it’s smaller is because the multiplexing and connection management is already handled by QUIC. What’s left is an interpretation of the HTTP protocol.

TCP head-of-line blocking

With SPDY and HTTP/2 we now have a single TCP connection being used to connect to a server instead of multiple connections for each asset on a page. That one TCP connection can independently request and receive resources.

spdy_multiplexed_assets

(Source: QUIC: next generation multiplexed transport over UDP)

Now that everything depends on that single TCP connection, a downside is introduced: head-of-line blocking.

In TCP, packets need to arrive in the correct order. If a packet is lost on its way to/from the server, it needs to be retransmitted. The TCP connection needs to wait (or “block”) on that TCP packet before it can continue to parse the other packets, because the order in which TCP packets are processed matters.
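A small sketch of why ordering hurts: an in-order (TCP-style) receiver can only hand data to the application in sequence, so one missing packet stalls everything that arrived after it:

```python
def deliverable(received, next_seq=0):
    """Return the sequence numbers an in-order receiver can pass to the
    application, given the set of sequence numbers that have arrived."""
    out = []
    while next_seq in received:
        out.append(next_seq)
        next_seq += 1
    return out

# Packets 0-5 were sent; packet 2 was lost in transit.
arrived = {0, 1, 3, 4, 5}
print(deliverable(arrived))   # [0, 1] -- packets 3-5 are blocked behind 2
```

In the QUIC/UDP model, packets 3–5 would belong to independent streams and could be consumed immediately; only the resource that lost a packet waits.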

spdy_multiplexed_assets_head_of_line_blocked

(Source: QUIC: next generation multiplexed transport over UDP)

In QUIC, this is solved by not making use of TCP anymore.

UDP is not dependent on the order in which packets are received. While it’s still possible for packets to get lost during transit, they will only impact an individual resource (as in: a single CSS/JS file) and not block the entire connection.

quic_multiplexing

(Source: QUIC: next generation multiplexed transport over UDP)

QUIC is essentially combining the best parts of SPDY and HTTP2 (the multiplexing) on top of a non-blocking transportation protocol.

Why fewer packets matter so much

If you’re lucky enough to be on a fast internet connection, you can have latencies between you and a remote server between the 10-50ms range. Every packet you send across the network will take that amount of time to be received.

For latencies < 50ms, the benefit may not be immediately clear.

It’s mostly noticeable when you are talking to a server on another continent or via a mobile carrier using Edge, 3G/4G/LTE. To reach a server from Europe in the US, you have to cross the Atlantic ocean. You immediately get a latency penalty of +100ms or higher purely because of the distance that needs to be traveled.

network_round_trip_europe_london

(Source: QUIC: next generation multiplexed transport over UDP)

Mobile networks have the same kind of latency: it’s not unlikely to have a 100-150ms latency between your mobile phone and a remote server on a slow connection, merely because of the radio frequencies and intermediate networks that have to be traveled. In 4G/LTE situations, a 50ms latency is easier to get.

On mobile devices and for large-distance networks, the difference between sending/receiving 4 packets (TCP + TLS) and 1 packet (QUIC) can be up to 300ms of saved time for that initial connection.
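Back-of-the-envelope, using the packet exchange counts above (the 100 ms RTT is an illustrative figure for a transatlantic or slow mobile link):

```python
def setup_time_ms(round_trips, rtt_ms):
    """Connection-setup cost: each back-and-forth pays the full RTT once."""
    return round_trips * rtt_ms

rtt = 100  # ms, e.g. Europe -> US, or a congested mobile link
tcp_tls = setup_time_ms(4, rtt)   # TCP handshake plus TLS negotiation
quic    = setup_time_ms(1, rtt)   # QUIC to a previously unknown server
print(tcp_tls - quic)             # 300 -- ms saved before the first byte
```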

Forward Error Correction: preventing failure

A nifty feature of QUIC is FEC or Forward Error Correction. Every packet that gets sent also includes enough data of the other packets so that a missing packet can be reconstructed without having to retransmit it.

This is essentially RAID 5 on the network level.

Because of this, there is a trade-off: each UDP packet contains more payload than is strictly necessary, because it accounts for the potential of missed packets that can more easily be recreated this way.

The current ratio seems to be around 10 packets. So for every 10 UDP packets sent, there is enough data to reconstruct a missing packet. A 10% overhead, if you will.

Consider Forward Error Correction as a sacrifice in terms of “data per UDP packet” that can be sent, but the gain is not having to retransmit a lost packet, which would take a lot longer (recipient has to confirm a missing packet, request it again and await the response).
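The RAID-5 comparison can be made concrete with XOR parity: alongside a group of equal-length data packets, send one parity packet that is their byte-wise XOR, and any single missing packet in the group can be rebuilt from the rest. This is a simplified model of the idea, not QUIC’s exact FEC scheme:

```python
from functools import reduce

def xor_parity(packets):
    """Parity packet: byte-wise XOR across equal-length data packets."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*packets))

def recover(survivors, parity):
    """Rebuild the single missing packet: XOR the survivors with parity."""
    return xor_parity(survivors + [parity])

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(group)        # sent along with the group

lost = group.pop(1)               # pkt1 goes missing in transit
print(recover(group, parity))     # b'pkt1' -- rebuilt, no retransmit needed
```

The cost is the extra parity bytes on the wire (the ~10% overhead mentioned above); the gain is skipping the confirm-and-retransmit round trip.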

Session resumption & parallel downloads

Another exciting opportunity with the switch to UDP is the fact that you are no longer dependent on the source IP of the connection.

In TCP, you need 4 parameters to make up a connection. The so-called quadruplets.

To start a new TCP connection, you need a source IP, source port, destination IP and destination port. On a Linux server, you can see those quadruplets using netstat.
$ netstat -anlp | grep ':443'
…
tcp6  0  0  2a03:a800:a1:1952::f:443  2604:a580:2:1::7:57940  TIME_WAIT  -
tcp   0  0  31.193.180.217:443        81.82.98.95:59355       TIME_WAIT  -
…
If any of the parameters (source IP/port or destination IP/port) change, a new TCP connection needs to be made.

This is why keeping a stable connection on a mobile device is so hard, because you may be constantly switching between WiFi and 3G/LTE.

quic_parking_lot_problem

(Source: QUIC: next generation multiplexed transport over UDP)

With QUIC, since it’s now using UDP, there are no quadruplets.

QUIC has implemented its own identifier for unique connections called the Connection UUID. It’s possible to go from WiFi to LTE and still keep your Connection UUID, so no need to renegotiate the connection or TLS. Your previous connection is still valid.
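The difference is easy to model: key the connection state by the TCP quadruplet and a network switch invalidates it; key it by a connection ID and it survives. (Illustrative only — the addresses are made up and this is not QUIC’s wire format.)

```python
import uuid

# TCP-style: identity is the (src_ip, src_port, dst_ip, dst_port) quadruplet
sessions_tcp = {("10.0.0.5", 59355, "93.184.216.34", 443): "negotiated-tls-state"}

# Phone roams from WiFi to LTE: source IP/port change, so the old key is gone
new_key = ("100.64.0.9", 41002, "93.184.216.34", 443)
print(new_key in sessions_tcp)      # False -> full reconnect + TLS redo

# QUIC-style: identity is a connection ID independent of the IP path
conn_id = uuid.uuid4()
sessions_quic = {conn_id: "negotiated-tls-state"}
print(conn_id in sessions_quic)     # True even after the IP changed
```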

This works the same way as the Mosh Shell, keeping SSH connections alive over UDP for a better roaming & mobile experience.

This also opens the doors to using multiple sources to fetch content. If the Connection UUID can be shared over a WiFi and cellular connection, it’s in theory possible to use both media to download content. You’re effectively streaming or downloading content in parallel, using every available interface you have.

While still theoretical, UDP allows for such innovation.

The QUIC protocol in action

The Chrome browser has had (experimental) support for QUIC since 2014. If you want to test QUIC, you can enable the protocol in Chrome. Practically, you can only test the QUIC protocol against Google services.

The biggest benefit Google has is the combination of owning both the browser and the server marketshare. By enabling QUIC on both the client (Chrome) and the server (Google services like YouTube, Google.com), they can run large-scale tests of new protocols in production.

There’s a convenient Chrome plugin that can show the HTTP/2 and QUIC protocol as an icon in your browser: HTTP/2 and SPDY indicator.

You can see how QUIC is being used by opening the chrome://net-internals/#quic tab right now (you’ll also notice the Connection UUID mentioned earlier).

quic_net_internals_sessions

If you’re interested in the low-level details, you can even see all the live connections and get individual per-packet captures: chrome://net-internals/#events&q=type:QUIC_SESSION%20is:active.

quic_debug_packets_chrome

Similar to how you can see the internals of a SPDY or HTTP/2 connection.

Won’t someone think of the firewall?

If you’re a sysadmin or network engineer, you probably gave a little shrug at the beginning when I mentioned QUIC being UDP instead of TCP. You’ve probably got a good reason for that, too.

For instance, when we at Nucleus Hosting configure a firewall for a webserver, the firewall rules look like this.

firewall_http_https_incoming_allow

Take special note of the protocol column: TCP.

Our firewall isn’t very different from the one deployed by thousands of other sysadmins. At this time, there’s no reason for a webserver to allow anything other than 80/TCP or 443/TCP. TCP only. No UDP.

Well, if we want to allow the QUIC protocol, we will need to allow 443/UDP too.

For servers, this means opening incoming 443/UDP to the webserver. For clients, it means allowing outgoing 443/UDP to the internet.
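As an illustration, with iptables the server-side change amounts to mirroring the existing HTTPS rule for UDP (chain names and policy details will vary per setup):

```shell
# Existing rule: HTTPS over TCP
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Additional rule needed for QUIC: same port, but UDP
iptables -A INPUT -p udp --dport 443 -j ACCEPT
```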

In large enterprises, I can see this being an issue. Getting it past security to allow UDP on a normally TCP-only port sounds fishy.

I would have thought this to be a major problem in terms of connectivity, but as Google’s experiments show, this turns out not to be the case.

quic_connection_statistics

(Source: QUIC Deployment Experience @Google)

Those numbers were given at a recent HTTP workshop in Sweden. A couple of key pointers:

  • Since QUIC is only supported on Google Services now, the server-side firewalling is probably OK.
  • These numbers are client-side only: they show how many clients are allowed to do UDP over port 443.
  • QUIC can be disabled in Chrome for compliance reasons. I bet a lot of enterprises have disabled QUIC, so those connections aren’t even attempted.

Since QUIC is also TLS-enabled, we only need to worry about UDP on port 443. UDP on port 80 isn’t very likely to happen soon.

The advantage of doing things encrypted-only is that Deep Packet Inspection middleware (aka intrusion prevention systems) can’t decrypt the TLS traffic and modify the protocol; they see binary data over the wire and will – hopefully – just let it pass through.

Running QUIC server-side

Right now, the only webserver that can get you QUIC is Caddy since version 0.9.

Both client-side and server-side support is considered experimental, so it’s up to you to run it.

Since no one has QUIC support enabled by default in the client, you’re probably still safe to run it and enable QUIC in your own browser(s). (Update: since Chrome 52, everyone has QUIC enabled by default, even to non-whitelisted domains)

To help debug QUIC, I hope curl will implement it soon; there certainly is interest.

Performance benefits of QUIC

In a 2015 blog post, Google shared several results from the QUIC implementation.
As a result, QUIC outshines TCP under poor network conditions, shaving a full second off the Google Search page load time for the slowest 1% of connections.

These benefits are even more apparent for video services like YouTube. Users report 30% fewer rebuffers when watching videos over QUIC.
A QUIC update on Google’s experimental transport (2015)
The YouTube statistics are especially interesting. If these kinds of improvements are possible, we’ll see quick adoption in video streaming services like Vimeo or “adult streaming services”.

Conclusion

I find the QUIC protocol to be truly fascinating!

The amount of work that has gone into it, and the fact that it’s already running on some of the biggest websites around and simply working, blows my mind.

I can’t wait to see the QUIC spec become final and get implemented in other browsers and webservers!

Update: comment from Jim Roskind, designer of QUIC

Jim Roskind was kind enough to leave a comment on this blog (see below) that deserves emphasising.
Having spent years on the research, design and deployment of QUIC, I can add some insight. Your comment about UDP ports being blocked was exactly my conjecture when we were experimenting with QUIC’s (UDP) viability (before spending time on the detailed design and architecture). My conjecture was that the reason we could only get 93% reachability was because enterprise customers were commonly blocking UDP (perchance other than what was needed for DNS).

If you recall that historically, enterprise customers routinely blocked TCP port 80 “to prevent employees from wasting their time surfing,” then you know that overly conservative security does happen (and usability drives changes!). As it becomes commonly known that allowing UDP:443 to egress will provide better user experience (i.e., employees can get their job done faster, and with less wasted bandwidth), then I expect that usability will once again trump security … and the UDP:443 port will be open in most enterprise scenarios.

… also … your headline using the words “TCP/2” may well IMO be on target. I expect that the rate of evolution of QUIC congestion avoidance will allow QUIC to track the advances (new hardware deployment? new cell tower protocols? etc.) of the internet much faster than TCP.

As a result, I expect QUIC to largely displace TCP, even as QUIC provides any/all technology suggestions for incorporation into TCP. TCP is routinely implemented in the kernel, which makes evolutionary steps take 5-15 years (including market penetration!… not to mention battles with middle-boxes), while QUIC can evolve in the course of weeks or months.

— Jim (QUIC Architect)
Thanks Jim for the feedback, it’s amazing to see the original author of the QUIC protocol respond!

 

Content retrieved from: https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-udp/.

GPO and Performance : Part 4

 Farm vs. Active Directory

Citrix policies, i.e. policies applying to the VDAs, can be stored in these two locations:

  • Farm (database)
  • Active Directory and Sysvol (Group Policy)

Both types of policies can be used together. Their settings are merged on the client by the VDA.

Precedence

Settings configured in Group Policy have precedence over farm settings. Settings are applied in the following order (highest priority last):

  • Local
  • Farm
  • Site
  • Domain
  • OU
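This “highest priority last” ordering is effectively a layered merge in which each later scope overwrites earlier ones. A small Python sketch, with made-up scope contents purely for illustration (these are not real Citrix setting names):

```python
# Hedged sketch of "applied in order, highest priority last":
# later scopes simply overwrite earlier ones. Settings are illustrative.
scopes = [
    ("Local",  {"ClipboardRedirection": True}),
    ("Farm",   {"ClipboardRedirection": False, "SessionTimeout": 60}),
    ("Site",   {"SessionTimeout": 30}),
    ("Domain", {}),
    ("OU",     {"ClipboardRedirection": True}),
]

effective = {}
for _name, settings in scopes:   # applied in order, highest priority last
    effective.update(settings)

print(effective)   # OU wins ClipboardRedirection, Site wins SessionTimeout
```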

Policy Refresh

Farm Policy

New or changed settings are distributed to VDAs:

  • When the VDA registers with a DDC
  • When a user logs on

These events trigger a BrokerAgent CONFIGURATION SET event. BrokerAgent.exe writes changed farm policies to %ProgramData%\Citrix\PvsAgent\LocallyPersistedData\BrokerAgentInfo\<GUID>.gpf. BrokerAgent.exe then triggers a policy evaluation via CitrixCseClient.dll. This causes CitrixCseEngine.exe to process policy (see below).

Group Policy

Group Policy is updated following the regular Group Policy cycle, with an additional refresh at session reconnection added by Citrix:

  • Computer startup
  • User logon
  • Background refresh
  • When triggered by gpupdate
  • Session reconnection

Citrix Group Policy Client-Side Extension (CSE)

In order to hook into Group Policy operations Citrix adds the client-side extension CitrixCseClient.dll. The Citrix CSE is configured in such a way that it is called every time Group Policy is applied. Its main task is to notify the Citrix Group Policy Engine service (see below).

In addition to that, the CSE checks the following undocumented registry values in HKLM\SOFTWARE\Citrix\GroupPolicy:

  • CseIgnoreCitrixComputerPolicyTrigger
  • CseIgnoreCitrixUserPolicyTrigger
  • CseIgnoreWindowsComputerPolicyTrigger
  • CseIgnoreWindowsUserPolicyTrigger
  • CseIgnoreWindowsBackgroundComputerPolicyTrigger
  • CseIgnoreWindowsBackgroundUserPolicyTrigger

If you want to change how/when Citrix Policy is applied, those values look like a good place to start.

Citrix Group Policy Engine Service

All the important work is done by the Citrix Group Policy Engine Service (CitrixCseEngine.exe). It is notified by the local Citrix CSE (CitrixCseClient.dll) whenever a policy refresh needs to happen. It then combines Group Policy settings with farm settings, applies them and creates RSoP data. Resulting policy settings are written to the registry:

  • Computer: HKLM\SOFTWARE\Policies\Citrix
  • User: HKLM\SOFTWARE\Policies\Citrix\<SessionID>\User

In addition to generating the resulting policy values, the Citrix Group Policy Engine Service creates several cache and helper files: actual policy settings are stored as GPF files in %ProgramData%\Citrix\CseCache. Rollback and RSoP information is written to Rollback.gpf and Rsop.gpf, respectively, in %ProgramData%\Citrix\GroupPolicy.

 

GPO and Performance : Part 3

 Filtering

Group Policy offers four ways to control where the settings defined in a Group Policy Object (GPO) are applied:

  • Organizational units (OUs)
    • Group user/computer objects in OUs
    • Link GPOs to OUs
  • Security
    • Change GPO security so that the GPO applies to specific groups
    • Required permissions: read + apply group policy
    • Works not only for users, but also for computer accounts
  • WMI filters
    • Specify a WMI query
    • The GPO is applied only if the query returns true
    • Applies to entire GPOs
  • Item-level targeting (ILT)
    • Specify targeting criteria
    • A setting is applied only if the criteria match
    • Applies to individual settings (in case of registry settings: can also apply to a collection of settings)
    • Available for Group Policy Preferences (GPPs) only, not for Policies

Out of these four, two are interesting in terms of performance: WMI filters and item-level targeting. We are going to dedicate the rest of this article to them.

How to Measure WMI Query Performance

The execution time of WMI queries can be measured easily by running the query through the PowerShell cmdlet Measure-Command. For increased accuracy we let the query run a thousand times. The command looks like this:

Measure-Command {
   for ($i = 0; $i -lt 1000; $i++) {
      Get-WmiObject -Query "SELECT * FROM Win32_OperatingSystem WHERE BuildNumber > '7000'"
   }
} | Select-Object TotalMilliseconds

Execution Time of Popular WMI Queries

Query | Execution (ms) | Description
SELECT * FROM Win32_OperatingSystem WHERE BuildNumber > '7000' | 17 | Require at least Windows 7
SELECT * FROM Win32_OperatingSystem WHERE OSArchitecture = '64-Bit' | 18 | Require 64-bit Windows
SELECT * FROM Win32_Keyboard WHERE Layout = '00000407' | 8 | Require German keyboard layout
SELECT * FROM Win32_ComputerSystem WHERE Name LIKE 'vpc-%' | 7 | Computer name starts with a certain prefix
SELECT * FROM Win32_Product WHERE Name LIKE '%Adobe Reader%' | 11,740 | Require Adobe Reader to be installed

As the data in the table above shows, the execution time of WMI queries is not so bad: 10-20 ms per GPO (remember: WMI filters apply to an entire GPO) typically does not significantly delay logon. However, there are exceptions. Most notorious is the Win32_Product WMI class. In a nutshell: do not use it unless you do not care about performance at all.

WMI Query Optimization

A not so well-known optimization is to ask WMI for one specific attribute only instead of requesting all the fields a class can store. In other words: replace the wildcard with an attribute that is guaranteed to have a value. With that small change the execution times of most queries drop by approximately 50%:

Query | Execution (ms) | Description
SELECT BuildNumber FROM Win32_OperatingSystem WHERE BuildNumber > '7000' | 9 | Require at least Windows 7
SELECT OSArchitecture FROM Win32_OperatingSystem WHERE OSArchitecture = '64-Bit' | 8 | Require 64-bit Windows
SELECT Layout FROM Win32_Keyboard WHERE Layout = '00000407' | 8 | Require German keyboard layout
SELECT Name FROM Win32_ComputerSystem WHERE Name LIKE 'vpc-%' | 4 | Computer name starts with a certain prefix
SELECT Name FROM Win32_Product WHERE Name LIKE '%Adobe Reader%' | 11,640 | Require Adobe Reader to be installed

Performance Impact of WMI Filters

To evaluate the impact WMI filters have on Group Policy processing performance I created 100 GPOs with a single GPP registry value each. Then I compared:

  • No WMI filter
  • A WMI filter on each GPO, returning true (I used the filter "SELECT Name FROM Win32_ComputerSystem WHERE Name LIKE 'Citrix-%'")

The result:

Group Policy - WMI filter performance

As you can see in the graph above, adding a WMI filter to a GPO prolongs processing time for that GPO by about 9 ms. That is more or less the execution time of the WMI query we determined earlier. This tells us two things:

  1. You can gauge the overhead a WMI filter adds to your GPO processing time by timing the filter’s query independently with PowerShell
  2. WMI filter performance is much better than commonly believed

Item-Level Targeting

Item-level targeting (ILT), available for Group Policy Preferences only, can be used to reduce the number of GPOs by combining settings for different sets of users and different types of machines.

A Microsoft blog article notes that ILT is not inherently harmful, but recommends avoiding the following ILT evaluations because they require round-trips over the network to Active Directory:

  • OU
  • LDAP query
  • Domain
  • Site

Computer security group membership evaluation, however, is fast. Make sure KB2561285 is installed on Windows Vista and Windows 7, though.

Grouping

ILT comes with an unnecessary architectural limitation: if you have a GPO with many settings with identical ILT filters, that one filter is run once per setting. The engine is not clever enough to realize that it already knows the answer from the previously applied setting. It would be nice to be able to apply ILT filters to groups of settings. That is possible (groups are called collections), but unfortunately only for registry values.
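The cost of that limitation, and what grouping (or a memoizing engine) would buy, can be illustrated with a small caching sketch. The filter function is a hypothetical stand-in for an ILT check, not actual GPP engine code:

```python
from functools import lru_cache

evaluations = 0

def ilt_filter(computer_name: str, prefix: str) -> bool:
    """Hypothetical stand-in for an ILT check: name starts with prefix."""
    global evaluations
    evaluations += 1
    return computer_name.startswith(prefix)

settings = [f"Setting{i}" for i in range(100)]

# What the engine does today: the identical filter runs once per setting.
applied = [s for s in settings if ilt_filter("CTX-01", "CTX-")]
naive_evaluations = evaluations          # 100 runs of one identical filter

# What grouping/memoization would do: evaluate once, reuse the answer.
evaluations = 0
cached_filter = lru_cache(maxsize=None)(ilt_filter)
applied = [s for s in settings if cached_filter("CTX-01", "CTX-")]
print(naive_evaluations, evaluations)    # 100 vs. 1
```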

Performance Impact of Item-Level Targeting

To evaluate the impact GPP item-level targeting has on Group Policy processing performance I created one GPO with 100 GPP environment settings. Then I compared:

  • No ILT
  • An ILT filter for the computer name on each setting

The result:

Group Policy - item-level targeting performance

The overhead per ILT filter is small: only 2 ms in my test lab. But keep in mind that ILT filters are applied per setting. If you have many settings, you may have many ILT filters, too.

WMI Filters vs. Item-Level Targeting

Execution times for the actual filter queries are not too far apart. However, WMI filters are invoked only once per GPO while item-level targeting may be invoked many times per GPO (increasing total runtime).
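Using the rough per-filter overheads measured above (about 9 ms per WMI filter, about 2 ms per ILT filter), a back-of-the-envelope model shows how the per-setting multiplier dominates. The GPO and setting counts below are illustrative, not measurements:

```python
# Back-of-the-envelope model using this article's measured overheads.
WMI_MS_PER_GPO = 9       # WMI filter: evaluated once per GPO
ILT_MS_PER_SETTING = 2   # ILT filter: evaluated once per setting

gpos = 10                # illustrative counts, not measurements
settings_per_gpo = 50

wmi_total = gpos * WMI_MS_PER_GPO                         # once per GPO
ilt_total = gpos * settings_per_gpo * ILT_MS_PER_SETTING  # once per setting

print(wmi_total, ilt_total)   # the per-setting cost dominates quickly
```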

WMI filters are stored in Active Directory while ILT filters are stored as files in SYSVOL. If a WMI filter returns false the GPO’s CSE settings files need not be fetched. In order to run ILT filters all CSE settings files need to be fetched because the filters are defined in them.

Continue reading with part 4 of this series.

 

GPO and Performance : Part 2

Foreground vs. Background Processing

Group Policy can be applied in the foreground and in the background.

Foreground processing occurs when:

  • The computer starts up (computer policy)
  • A user logs on (user policy)

Background processing occurs:

  • Every 90 minutes with a random offset of 0-30 minutes (can be changed)

During background processing many CSEs are invoked even when their settings are unchanged. This is governed by the registry value NoGPOListChanges and by policy settings in Computer Configuration > Administrative Templates > System > Group Policy.
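The default refresh schedule, 90 minutes plus a random 0-30 minute offset, can be sketched as follows (both intervals are configurable via policy; the randomization spreads out domain controller load):

```python
import random
from datetime import datetime, timedelta

# Sketch of the default background-refresh schedule: 90 minutes plus a
# random 0-30 minute offset (both values can be changed via policy).
BASE = timedelta(minutes=90)
MAX_OFFSET_MINUTES = 30

def next_refresh(last_refresh: datetime) -> datetime:
    offset = timedelta(minutes=random.uniform(0, MAX_OFFSET_MINUTES))
    return last_refresh + BASE + offset

now = datetime.now()
nxt = next_refresh(now)
print(nxt - now)   # somewhere between 90 and 120 minutes
```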

Synchronous vs. Asynchronous Processing

Group Policy processing can be synchronous (the system waits for completion) or asynchronous (other things happen at the same time). Background processing is always asynchronous. Foreground processing can be either.

In order to reduce perceived startup duration Microsoft changed the default processing mode from synchronous to asynchronous in Windows XP. The caveat is that users are logged on without the full set of policies applied. This is typically not what we want in controlled enterprise environments. To force synchronous processing enable the policy setting Computer Configuration > Policies > Administrative Templates > System > Logon > Always wait for the network at computer startup and logon – which is really horribly named, by the way.

Important: During synchronous foreground processing all CSEs are invoked even if there have not been any changes to their settings! This disables optimizations and may prolong the total policy processing time. As you may recall from part 1, some CSEs are always invoked, but others are normally only called when their settings have changed.

Timeouts

There are various timeouts built into Group Policy.

Everything

Timeout for Group Policy processing from start to end: 60 minutes. If a CSE has not finished by then, it continues to be processed, but asynchronously. This may affect software installations, for example, but you should not use Group Policy for software distribution anyway. Did anybody even try? I mean, except those poor souls writing technical documentation for Microsoft?

Scripts

Startup, shutdown, logon and logoff scripts started through Group Policy are limited to 10 minutes. I have not tested what happens when a script hits that limit, but I guess it will be terminated along with its child processes.

WMI Filters

WMI filters have a timeout of 30 seconds. Longer running WMI filters are aborted and treated as false.

Drive Mappings

Group Policy Preferences drive mappings are limited to 30 seconds – each.

Important: if target servers are unavailable, logons become slow. The typical delay is 5-7 seconds – per mapping. Trying to map three drives to nonexistent servers in some obscure part of the logon script may easily cost your users 20 seconds. Every single time they log on or start a published application. This happens way more often than one should think it does.
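The mechanics of that delay are easy to reproduce: a connection attempt to an unreachable server blocks until its timeout expires, and the waits add up per mapping. A small Python sketch using a socket timeout; the address is deliberately non-routable and the 0.5 s timeout stands in for the several-second SMB timeout:

```python
import socket
import time

# Each mapping to a dead server blocks until its timeout expires, and the
# waits add up per mapping. 10.255.255.1 is deliberately non-routable; the
# short timeout stands in for the real several-second SMB timeout.
def try_connect(host: str, port: int, timeout: float) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

start = time.monotonic()
reachable = try_connect("10.255.255.1", 445, timeout=0.5)
elapsed = time.monotonic() - start

print(reachable, round(elapsed, 1))   # False, bounded by the timeout
```

Multiply that wait by three dead mappings and a per-mapping timeout of 5-7 seconds, and the 20-second logon delay described above falls right out.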

Loopback Modes

Merge Mode

Merge mode forces two Group Policy passes for user settings. The first pass according to the position of the user object in the OU hierarchy (as usual), the second pass according to the position of the computer object in the OU hierarchy. The results from both passes are then merged.

I strongly advise against using merge mode, only partly because of the performance degradation. The main reason I cannot recommend it is that it easily causes confusion as to which settings apply when.

For completeness’ sake: even with merge mode there is only one pass for computer settings.

Replace Mode

In replace mode the location of the computer object replaces the location of the user object. Apart from that everything happens as always. There is no change in policy processing duration and maintenance is a lot easier than with merge mode.

Logging

Group Policy logging can be enabled by setting the DWORD registry value GPSvcDebugLevel to 0x30002 in the key HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics. Additionally, and this is easily forgotten, the directory %windir%\debug\usermode must be created.

Once the registry value is in place and the directory exists, logging is enabled (no need to reboot). Log messages are written to the file gpsvc.log in the usermode directory.

Deciphering gpsvc.log is not the easiest task. This 7,700 word article on Microsoft’s Ask the Directory Services Team blog tries to help.

Group Policy Preferences logging can be enabled through Group Policy. Take a look at these settings: Computer Configuration > Policies > Administrative Templates > System > Group Policy > Logging and tracing.

Multithreading

The Group Policy service is single-threaded, so it does not benefit from multiple CPUs. The only exception: during background processing, user and computer policy are processed in separate threads.

Disabling GPO Sides

Let’s conclude this post by answering the following question: Is it worth disabling the computer or user side of a GPO?

Group Policy Settings Details tab 2

I must admit I have applied this “optimization” since the days of Windows 2000. The basic idea is that if Group Policy has fewer things to worry about, processing should be faster. So, if we have GPOs that only contain user settings anyway, why not disable the computer side of those GPOs, and vice versa?

To determine the effects of disabling one GPO side I created 40 GPOs and measured the processing time per GPO. Then I disabled each GPO’s user side and repeated the process. The result:

Disabling GPO sides

As you can see, the performance gain of disabling one GPO side is negligible.

This is because if a policy side is unused, the only overhead is the Active Directory query needed to determine that, and the same query is required to evaluate the disable option as to determine whether any CSEs are implemented for that side of the GPO.

Continue reading with part 3 of this series.