Tuesday, December 29, 2015

The long quest to reinstate SharePoint Explorer View performance

In our company, business end-users like the SharePoint Explorer View because it feels familiar, like accessing shared folders. These end-users reported that the SharePoint Explorer View had become non-performant. It used to perform satisfactorily, but for a while its responsiveness had structurally degraded: opening the Explorer View initially took 30 seconds or even minutes. On immediate retry, it typically opened directly. Wait a while, and the problem repeated itself.
It proved problematic to establish the root cause of this performance degradation. SharePoint Explorer View functionality depends on a stack of IT components: local, network, and SharePoint server side.
We identified the following list of potential causes, based on literature study (internet search) and our own common sense:
  1. Slow SharePoint processing (IIS, SharePoint code)
  2. Slow SharePoint content retrieval (SQL, storage)
  3. Slow or blocking network
  4. Slow authentication handling (NT Explorer as web client to SharePoint server, e.g. see Prompt for Credentials When Accessing FQDN Sites From a Windows system)
  5. Slow Web Proxy Auto Detection (see Explorer View very poor performance, Slow response working with WebDAV resources on Windows systems)
  6. SMB protocol blocked (Source: Microsoft Whitepaper "Understanding and Troubleshooting SharePoint Explorer View")
  7. Interference with IIS WebDAV Publication Service at SharePoint server (SharePoint – Open with Windows Explorer – problems, 5. WebDAV Publishing, and Explorer view does not work in some scenarios when the SharePoint farm is on Windows Server 2008 R2)
  8. Contention with network requests from other locally running programs
  9. Network requests blocked or delayed by anti-virus processing
  10. Slowness in local WebDAV client processing (Windows Explorer as WebDAV client)
  11. Outdated local Windows binaries (Explorer binaries (Shell32.dll), WebDAV binaries (Webclnt.dll, Davclnt.dll, Mrxdav.sys) & SMB binaries (Mrxsmb.sys, Mrxsmb20.sys, Rdbss.sys))
  12. Interference with Internet Explorer Add-Ons
It was hard to identify whether any of the above really had the negative impact. This is in particular because it is difficult to get an overall view on the processing of all the involved IT components: local, virus scanner, network, firewall, load-balancer, server side, SQL, storage, … In reality it boils down to inspecting the behaviour of each single component, and then trying to correlate it with the behaviour of the other components to gain insight into the overall picture. We inspected IIS logs to detect whether the long times originated in SharePoint processing of the WebDAV protocol, but this did not result in root cause identification. Then I applied Fiddler to detect what is going on over the wire. And as Fiddler is limited to the http protocol, I also used Wireshark to dig deeper at the on-the-wire level, including other protocols. None of these exercises resulted in cause identification. If there was any investigation outcome, my cautious conclusion was that the delay is neither on the wire nor in SharePoint processing, but rather originates on the local client. To investigate that, I used Process Monitor, benchmarking scenarios with and without opening Explorer View. I did see some noticeable differences and thus suspects for the delay: antivirus processing, and extensive additional activity of svchost.exe; but I could neither confirm nor refute these as symptom or cause.
As the progress in the problem investigation had stalled, we involved Microsoft Premier Support. The engineers started by investigating our captures of problem occurrences (Wireshark, Fiddler, Process Monitor, Netsh). Their initial analysis confirmed my own finding that it was not due to network delay. The next suspect on the Microsoft list was local interference by antivirus filters. To confirm or exclude this as cause, we uninstalled the antivirus on a test client. Afterwards, the problem still manifested on that test client. Via a renewed scan of the network captures, Microsoft identified a 3rd suspect: delay caused in the local WebClient due to provider priority handling, with the SMB protocol prioritized above WebDAV. I had actually thought of this myself a few weeks before, inspired by an old (2006!) whitepaper of Microsoft Services Support; the symptoms of our issue matched what is described in that whitepaper. However, for unclear reasons (lack of understanding of the complexity of the WebClient handling, no trust in the whitepaper given its age?) my suggestion of this as problem area was not investigated in depth by our operations service provider. As Microsoft now suggested basically the same, I convinced our operations to conduct a pragmatic validation to determine or exclude it as the problem cause. The simple first test is to block ICMP traffic to the SharePoint farm, and then re-test the performance of SharePoint Explorer View. But also here the results were negative: no performance improvement in Explorer View was observed after blocking ICMP traffic.
We finally had a breakthrough when we noticed that the Explorer View slowness did not occur in the infra scenario in which a laptop connects via VPN to our SharePoint farm. It was then a matter of identifying the differences in the stack of IT components between on-premises access and VPN access. I also made Wireshark captures of the 2 different infra scenarios. In these captures we noticed that in the on-premises scenario, the retrieval of the browser Proxy Auto Configuration (PAC) file repeatedly timed out. In the VPN scenario, this effect did not occur. The explanation is the different network path to retrieve the PAC file; that of the on-premises situation included a blocking IT node (which actually is a cloud-hosted solution).
As it turned out, this was indeed the root cause. We resolved the blocking of the PAC file by caching it on the network perimeter, and immediately regained the performance of SharePoint Explorer View.
Bonus: the last mile for accelerating Explorer View performance ...
With this infra correction, the performance of the ‘SharePoint Explorer View’ is greatly improved: it opens up almost immediately, in under 1 second. That is, except for the very 1st time: that takes 4-6 seconds.
The cause of this lies at Windows OS level: Windows Explorer utilizes the WebClient component to connect via the WebDAV protocol. And the very 1st time, this WebClient component must be instantiated / brought to life, which takes the additional 3-4 seconds.
If you also want to get rid of those seconds, you can set the startup mode of the WebClient service to ‘Automatic’ (e.g. via 'sc config WebClient start= auto' on the client).

Monday, December 21, 2015

Our approach to deliver aligned business solutions

In my role as Collaboration Solution Architect I align with our business stakeholders to deliver company-internal collaboration solutions. In these solution alignments, the technology is of secondary nature. Its impact is both enabling and constraining. Our technology options are in line with the generic enterprise IT architecture: ‘SAP and Microsoft, unless’. So we apply the SAP platform for some dedicated collaboration scenarios, and Microsoft for almost all the rest. As most SharePoint customers have come to realize, we also impose governance on the collaboration solution setups we are allowed to deliver to our business. The overarching principle is to deliver only future-proof solutions. This translates to applying standard / out-of-the-box platform capabilities where feasible, and refraining from building custom solutions just for the sake of building custom solutions. And in case a custom solution is needed to deliver the requested functionality, then comply with the Microsoft guideline to stay away from farm-based solutions and instead rely on the Add-In model.
So, how does this work in practice? Let me clarify with a true example. A few months ago I was invited to an alignment meeting with our internal finance department. The title of the meeting was ‘BI portal’. In the meeting, the business stakeholders told about their functional intent, and expanded on all of their requirements. Being aware of the technology strategy, they realized upfront that the target platform would be SharePoint. And they had already built themselves an image of how the solution should look.
The next step was to map their functional vision and detailed requirements to feasibility. The initial focus here is to discuss and challenge the functional vision. Not that I’m the subject expert, they are, but still it makes sense to ‘walk through’ the functional vision to evaluate it on true added business value. Next is to map the vision, and the detailed requirements, to the technology. What will the generic setup be, what can be delivered out-of-the-box via SharePoint features, what can be delivered through customization, what would require custom solutions, and what is not allowed due to our governance constraints? Helpful in both alignments, the functional and the technical, is to use examples of solutions delivered for others, to trigger and inspire. These example solutions can be your own, but of course also from anywhere else. The SharePoint community is very generous in sharing knowledge and experiences.
As follow-up of the alignment meeting, I sketched a potential solution direction at high level. I deliberately use PowerPoint as the format for this, as it by its nature keeps you from over-extensive writing. The outline of the solution architecture is: a) sketch of the context, b) main requirements, c) UI impressions (mockups), d) global design utilizing SharePoint platform capabilities, e) the information architecture. a) and b) serve to verify whether my understanding of the request is valid, and c) is to agree on the user experience.
After fine-tuning the business aspects with the stakeholders, the next step is to get technical consent: d) and e). At a minimum, communicate the setup of the solution direction with technical peers and our SharePoint operations. In our company we have formalized this via ‘Template Control Teams’ per technology platform.
If and after both functional and technical alignment succeed, next is to ‘build’ it in an agile manner. Deliver a first version, not yet feature-complete, but it must already have functional meaning and value. Demonstrate it in a ‘sprint demo’ to business, discuss behaviour and new insights, and deliver these in the next ‘sprint’.
If the solution setup is restricted to SharePoint standard only, plus potential customization as 'SharePoint content' (html, CSS, javascript), it is possible to build up the application directly in the production collaboration space. Although I then typically choose to first build it in my own 'development/playground' site, and after business agreement deploy it to the target location by repeating the 'content-based' provisioning.

Thursday, December 17, 2015

Beware: List Validation Settings also effective in Workflow execution

I’m provisioning a ‘BPM-light’ process on our SharePoint ‘business platform’. The utilized SharePoint building blocks are Document Sets, Content Types and SharePoint Designer Workflow. The ‘light’ solution worked correctly when demonstrated at the first evaluation moment ('sprint') to the designated end-users. Of course they had some additional fine-tuning requests, which is just the way to deliver and align on business functionality in an agile manner. After I implemented some of the minor additional changes, the workflow no longer functioned and reported an error.
The workflow error message exposed a problem with setting a value of a Choice field in the current item:
This worked before; what changed to make it fail now?
Well, the explanation was found in my recent changes: to ensure correct user input when creating a new Document Set, I had added a validation rule. The validation rule is simple: on creation, the State value must always be ‘Draft’ (*)
The explanation of the introduced error when setting the ‘State’ field value from workflow execution, is that the validations are not limited to UI/form handling. The validation settings are applied at SharePoint level whenever a change to an item is made. And thus also for the item change initiated from the workflow.
I tried to build a differentiator into the validation, to only require ‘State=Draft’ upon Document Set creation. But I could not get a working validation rule for that. I tried:
  • =OR([Created]<[Modified],[State]="Draft")
  • =OR((INT(Modified-Created) > 0),(State="Draft"))
  • =OR((DATEDIFF(Modified,Created,"s")>0),(State="Draft"))
But all 3 formulas return False when at modification time the state is set to a value different from ‘Draft’. The explanation is that, as I noticed, the values of ‘Created’ and ‘Modified’ are always both zero (0) at validation time. Likely a sequencing issue.
(*) Note: For optimal user experience I would have preferred to modify the form itself, and hide or disable the ‘State’ field to avoid the user changing it to a non-allowed value. However, SharePoint does not support customizing the NewForm in case of Document Sets. The only option you have is to replace the standard Document Set page (_layouts/NewDocSet.aspx) with another server-side based version [How to: Create a New Document Set Form in SharePoint Server 2010 (ECM)], but our SharePoint governance rules (‘future-proof solutions’) do not allow us to do that.

Wednesday, December 9, 2015

Lessons learned with Add-In update execution

In our intranet we have Add-In instances installed throughout the site hierarchy. The content owners of subsites are enabled to utilize any of the provided set at their own will. In a release, IT takes care of automatically updating all installed instances of the Add-In(s) included in the release. This is done in a 3-step approach:
  • Add the updated Add-In(s) to the SharePoint Add-In catalog;
  • Then first manually update ('Get It') the installed Add-In on the rootweb, for an immediate result visible to end-users;
  • Next, via PowerShell script, update all Add-In instances in the site hierarchy: traverse the hierarchy, and on each hostweb that has the Add-In installed, execute 'Update-SPAppInstance'. Under the SharePoint hood, this delegates the Add-In update(s) to execution via a timer job.
We've learned 3 important lessons with this Add-In update approach:
  1. The completion of the Add-In update throughout the site hierarchy is very time-consuming. In our intranet we've experienced elapsed times of over 10 hours.
  2. Until the full completion of the Add-In update, it is not possible to manually update the same Add-In. A situation in which you want or need to do that (and we encountered one) is when during the release it is observed that the updated Add-In has an issue. As Add-In rollback is then no longer possible (once an Add-In update is executed), the remedy is to deploy a fix (an update on the update).
  3. Most surprising: until the full completion of the update of Add-In 'X', also the manual update of any other Add-In 'Y' is blocked from completion.
The lesson we took from these observations is that in intranet releases, we first complete all manual Add-In updates. This includes the potential additional installation of a required 'fix' on an 'updated Add-In'. And only once all manual Add-In updates are completed, and we have ascertained that the effect of each is as expected, do we execute the long-running Add-In update(s) via the PowerShell maintenance script throughout the entire site hierarchy.

Wednesday, December 2, 2015

Peculiar but Explained: Access Denied on page in search result preview

This morning I noticed that a page present in a search result displayed 'Sorry, you don't have access to this page' in the preview pane. Peculiar, as an authorization principle of SharePoint Enterprise Search is that search results are security trimmed, and only return results the logged-on user is authorized to see. Brainstorming with a colleague, we came up with the explanation. The search result actually included the url of a SharePoint subweb for which I do have read access. The page configured as 'Welcome Page' (landing page) in this subweb was not [yet] checked in, and therefore not available to me. And as the SharePoint Search preview applies the SharePoint publishing 'Welcome Page' redirection when hovering over a (sub)web url, this explains the 'Access Denied' experience in the preview pane. The above is confirmed: after that landing page was checked in, the preview shows the page impression instead of 'Access Denied'.

Friday, November 20, 2015

Chrome, the better performing browser for App/AddIn-model based applications

In the SharePoint App/AddIn model, each AddIn operates in its own isolated runtime security context. The separation is achieved by app-individual DNS domains. Beyond security isolation, the individual DNS domains also have performance ramifications. Good and bad (see SharePoint App-Model + NTLM results in more 401’s). A positive effect on performance is that the individual DNS app-domains enable browsers to utilize more parallel http connections. Modern browsers support parallel http connections, but limit them at host level (maximum http connections per host). In SharePoint App/AddIn context, the browser can thus open the per-host maximum of http connections for each of the separate DNS domains.
The Chrome browser has an additional performance advantage. As the only browser, it predicts the DNS destinations that will be requested in the handling of a request - DNS Prefetching & TCP Preconnect:
A unique and important optimization in Chrome is the ability to remember the set of domains used by all subresources referenced by a page. Upon the next visit to the page, Chrome can preemptively perform DNS resolution and even establish a TCP connection to these domains.
This prediction is based on previous visit(s). Upon a subsequent visit, Chrome initializes http connections per DNS host, up to the maximum of 6 per host. For a SharePoint page with one or more AddIns on it, Chrome immediately sets up 1 to a maximum of 6 http connections for the SharePoint hostweb domain, and per AddIn app-domain also up to a maximum of 6 http connections. Upon receiving and processing the SharePoint page, Chrome can immediately send the requests for the dependent urls of the contained AddIn(s) over the already established http connections. IE, Firefox, Safari and Edge (?) all apply a more 'just-in-time' approach: delay opening a (new) http connection until it is needed.
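The connection arithmetic above can be illustrated with a small helper (a hypothetical sketch; the hostnames are made up, and real browsers apply further heuristics on top of the per-host limit):

```javascript
// Upper bound of parallel http connections a browser will open for a page:
// each distinct host (SharePoint hostweb + every AddIn app-domain) gets its
// own connection pool of 'perHostLimit' (6 in most modern browsers).
function maxParallelConnections(resourceUrls, perHostLimit) {
  var hosts = new Set(resourceUrls.map(function (u) { return new URL(u).host; }));
  return hosts.size * (perHostLimit || 6);
}

// A page on the hostweb containing two AddIns, each on its own app-domain:
var pageUrls = [
  "https://intranet.example.com/sites/home/default.aspx",
  "https://app-1234.apps.example.com/pages/part.aspx",
  "https://app-5678.apps.example.com/pages/part.aspx"
];
console.log(maxParallelConnections(pageUrls)); // prints 18 (3 hosts x 6)
```

So the same page hosted without app-domains (everything on one host) would be capped at 6 connections, while the AddIn model raises the ceiling per extra app-domain.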

Thursday, November 12, 2015

Tip: how to pragmatically and quickly mitigate a malfunctioning Add-In in production

Earlier this year we brought our new SharePoint 2013 based intranet ('digital workplace') into production. A crucial functional element of the workplace concept is that employees can customize their own homepage with 'Apps' that are relevant for them, given their own role and personal interests. The majority of these functional Apps are technically provided as SharePoint Add-Ins, both SharePoint-hosted and provider-hosted. In the project to deliver the new intranet, we put a lot of effort into validating the correct operation of the diverse workplace Apps, with a particular focus on their performance. But despite that, soon after 'Go-Live' we experienced a performance problem with one of them, which effectively 'killed' our intranet experience. As such a situation is destructive for user acceptance, we needed an approach to quickly, on the spot, mitigate the non-performing App, so that next, with less pressure, the root cause could be investigated.
Since the "App" (aka Add-In instance) is installed on the personalized homepages, it was not really an option to remove and deinstall it: that would require automated modification of all the personalized pages to remove the App, that is, if the particular employee has the App added to his/her own page. Such automated modification of what in functional essence is under personal control of end-users was and is unsellable. We needed an alternative approach, in which, transparently to the end-users, the App execution is temporarily overruled.
Here it is important to understand how SharePoint deals with SharePoint Add-Ins. In a technical sense, in the SharePoint page response an Add-In is nothing more than an 'iframe++'. The frame src refers to the 'AppRedirect.aspx' launch page, including information about the Add-In to launch (see Launching SharePoint Apps, launch sequence, AppRedirect.aspx for a good explanation of this). The browser first receives and renders the html response for the SharePoint page, and next resolves the resulting iframe per included Add-In.
<iframe src="https://.../_layouts/15/appredirect.aspx?redirect_uri=....&client_id=..." id="..." height="250px" frameborder="0" width="720px"></iframe>
The mitigation approach for a non-functioning Add-In is now to replace, on the fly in the browser context, the 'AppRedirect.aspx' launch url with an url to an alternative 'application'. This can be as simple as an .htm content page. The direct effect is that users are shielded from issues in the actual Add-In, and instead for now receive an alternative html snippet replacing the malfunctioning App. This approach does not require automated modification of each of the personalized pages; it is sufficient to include a ScriptEditor in a shared webpart zone. It is an approach that can be executed from the SharePoint GUI, without code changes. And thus it can be applied for SharePoint on-premises as well as in Office 365 / SharePoint Online.
<script type="text/javascript">
    var iframes = document.getElementsByTagName('iframe');
    for (var i = 0; i < iframes.length; i++) {
        var src = iframes[i].src;
        if (src.indexOf('<AppRedirect.aspx url with reference to malfunctioning Add-In>') != -1) {
            iframes[i].src = 'https://.../Style%20Library/Mitigation/CodeSnippetAsAlternativeForMalfunctioningAddIn.aspx';
        }
    }
</script>

Tuesday, October 27, 2015

Don't: Publishing + 'Anyone who can read items'

I got a Lync ('Skype for Business') call from a colleague informing me that the layout of our SharePoint 2013 based 'Digital Workplace' appeared broken. I checked, and indeed it looked awful. So I asked our team whether anyone was doing something to our intranet in production. After initial denial (natural behaviour??), the perpetrator was identified. A developer had made a change to the masterpage, saved it, but intentionally did not publish it, to avoid regular readers already seeing it. Strangely however, as soon as it was saved, all intranet visitors saw his work, which was evidently still 'work in progress'. The explanation that all saw the unpublished version was quickly found: the Masterpage Gallery was incorrectly set so that 'Draft' items can be seen by "Anyone who can read items"... Corrected this to "Anyone who can edit items".

Thursday, October 8, 2015

Inconvenient ‘Shared Column’ in Document Set

I have the following design for a light-weight business process:
  1. a Document Set to bundle all documentation involved in a review process: the document to review, and accompanying documentation, review sheets, explanatory documentation, and so on;
  2. the Document Set preset with a document template for document to be reviewed;
  3. a SharePoint Designer workflow associated with the content type of the document template to steer the review process;
  4. and a workflow on the docset library to "archive" the docset once the review on contained document is completed.
The 2nd / outer workflow is required in addition to the inner workflow on the contained document, as a SharePoint Designer workflow does not give you access to the logical container, the Document Set.
The challenge is how to communicate from the inner to the outer workflow. I thought about doing this via a shared property between the docset and the contained document. However, here SharePoint exposes one of its peculiarities: although at the level of the contained document it appears as if you can set the shared field (it is editable in the edit form of the contained document), in reality the edit is ignored. Whether done explicitly and manually in the edit form, or set automatically in the workflow on the document. The 'Shared Column' is strictly owned at the containing DocSet level, and its values plus changes to them are pushed to all the contained items.
Note: This Sharegate post describes how to hide the Shared Columns in the Edit Dialog, as they are actually not editable at the level of the Document Set contents; so that at least your end-users will not get confused.
As it turns out, the concept of 'Shared Columns' can still be used to trigger, from the 'inner' workflow on the 'contained' document, a waiting condition in the 'outer' workflow on the DocSet. The key is to use 'Update item in Current List' in the inner workflow, and set the 'Shared Column' directly on the DocSet by selecting it via a matching value between the 'contained' document and the DocSet. Inspiration came from the post 'SharePoint 2010: How to update Parent Folders Timestamps when Child contents have been modified'.

Thursday, September 24, 2015

Architecture decision: SharePoint-hosted AddIn or 'plain-old Html/JavaScript/CSS'

When does a SharePoint-hosted AddIn have value above 'plain-old Html/JavaScript/CSS'?

As we developers like to jump on anything new, the design decision to deliver new functionality as a SharePoint AddIn is nowadays almost the default. I object to that. I consider the AddIn paradigm a useful addition for deploying functionality on SharePoint, but it is not a silver bullet. Beyond the advantages that the AddIn paradigm brings, using it also has its drawbacks:
  • Building functionality as AddIn results in extra build effort; code and testing for the AddIn installation, AddIn lifecycle management;
  • Deployment is more complex: add to SharePoint AppCatalog, and next add to site collection;
  • AddIn based deployment requires the presence of a SharePoint AppCatalog in the webapplication;
  • Reusing the AddIn functionality in multiple locations in the site structure results in multiple AddIn instances that each operate in isolation. While this can be desired, it also means that an AddIn update must be repeated for the multiple instances in the site structure, and may result in inconsistent behavior when not all AddIn instances are upgraded;
  • Cleanup of an AddIn is more complex, and largely happens under the SharePoint covers, outside the control and visibility of the site owner. I’ve seen examples of so-called Orphaned Apps (the initial name) that can only with difficulty be removed from the SharePoint content database.
For the above drawbacks, I as solution architect do not allow the decision whether or not to build as SharePoint AddIn to be made on the personal preference of the individual developer. I apply the design principle “keep the development + deployment model as simple as feasible”. And in many cases, ‘modern Apps’ hosted in SharePoint can just as well be delivered via the ‘content approach’: plain html5, javascript and css files, uploaded as content to a SharePoint document library. Note that this approach also works for SharePoint 2010 (and even 2007, but I assume/hope there are very few left that actually still build new functionality on a 2007 basis).
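As a minimal sketch of such a 'content-approach' app (the web url and list title here are hypothetical): a single html file with a script uploaded to a document library, reading list data via the standard SharePoint `_api` REST endpoint. The url construction can be isolated in a small helper:

```javascript
// 'Content-approach' sketch: a plain html/javascript file in a document
// library can read list data via the standard SharePoint REST API.
// This helper only builds the request url; webUrl and listTitle come from the caller.
function buildListItemsUrl(webUrl, listTitle, selectFields) {
  var base = webUrl.replace(/\/$/, ""); // tolerate a trailing slash
  var select = selectFields.length ? "?$select=" + selectFields.join(",") : "";
  return base + "/_api/web/lists/getbytitle('" + listTitle + "')/items" + select;
}

// In the page this would feed a plain ajax call, e.g.:
// fetch(buildListItemsUrl(_spPageContextInfo.webAbsoluteUrl, 'Announcements', ['Title']),
//       { headers: { Accept: 'application/json;odata=verbose' }, credentials: 'include' })
```

No AppCatalog, installation or app-domain is involved; deployment is simply uploading the files as content.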
Of course there are cases in which the AddIn concept truly adds value. I identify the following:
  1. When the functionality must be customizable, either by a content editor in case of a shared page, or by an end-user in case of a personalized page. For that you need AppPart properties, to customize the (individual) App instance
  2. When the functionality will be deployed multiple times in different locations (sites, web applications), with the flexibility to let it behave differently per location
  3. When you want to enable site owners, or even end-users via personalization, to add and remove the functionality to a page themselves (via the AppCatalog)
  4. For suppliers to package as direct installable ‘deployment unit’; either via Microsoft Store, or via local AppCatalog

Friday, September 11, 2015

SharePoint: platform, services API, or both?

Recently, SharePoint MVP Andrew Connell posed the argument to regard SharePoint from now on as a services API: Developers: SharePoint isn’t a Platform, SharePoint is a Service. His suggestion received multiple comments, mostly affirmative, some a bit cautious. A week later, Doug Ware published on his blog a reaction in which he warns against not utilizing SharePoint as a platform: Architects: SharePoint is a platform, treating it as only a service is a mistake.
My opinion in this verbal 'battle' with seemingly opposite perspectives is that both raise valid points. This is my addition to the discussion / position-taking (also posted as comment on Doug's post):
My view in these opposite perspectives is somewhere in the middle. In the past, the SharePoint community regarded and applied SharePoint as 1) a functional, 2) a development and 3) a deployment platform. The first still holds; for me it is ridiculous to e.g. not utilize the OOTB DMS features if you have SharePoint at hand in your company. The latter 2 however have changed: we can still use SharePoint to “build upon”, but we must no longer do that by deploying to the SharePoint infra. So for development, regard and utilize SharePoint less or even no longer as a platform, in favor of the services model; and as deployment model the SharePoint farm is, in a future-proof sense, an absolute no-go. Note that also on the functional level, Microsoft can unpleasantly surprise us when functions that were strongly advocated before have disappeared in the next version. This is among others the result of the long design and development timespans for SharePoint as a product itself. The bare essence of SharePoint 2010 was designed even before the year 2007, that of 2013 before 2010? In the IT world that is an eternity; the world is then completely different. That is one of the charms and benefits of online products: it enables SaaS providers (Salesforce.com, Microsoft, SAP, …) to respond in a much shorter timeframe to new IT possibilities and changed market requests.

Monday, August 17, 2015

Project-level Best Practices for delivering a performant Add-in based application

Indeed, all of the below are open doors!!
But in practice they are often either overlooked, or deliberately but with wrong reasoning (e.g. to save time now in the project) bypassed in development projects that deliver a SharePoint Add-in based application:
  1. Performance and Capacity Management must be applied as an integral subject during the application development project
  2. In the requirements, agree with the application owner on the performance aspects: page download times considered acceptable, expected parallel usage of the application
  3. Include performance Best Practices in the development guidelines. And make sure that all involved developers know of the guidelines, and that they apply them in the Add-Ins they individually develop
  4. Do not only develop + test SharePoint Add-ins ('Apps') in isolation; also conduct an integration test with multiple Add-ins on a page as the user is likely to use them, and monitor the page payload
  5. Thoroughly intake any external Add-ins before purchase; an intake on architecture, functionality, capacity management and maintenance aspects
  6. Structurally monitor and prove the application performance during the project, to detect at an early stage when something is introduced with a severe performance-decreasing impact. Load testing is a good means to implement this in the project, for performance quality assurance

Some architectural and technical tips

  • Cache where appropriate (but be aware that caching itself also costs resources)
  • Reduce the number of network roundtrips from client to application server - batch requests
  • Retrieve resources that are used in multiple Add-ins from a shared location - e.g. root site of the HostWeb, or a CDN (external or internal)
  • Reduce the impact of (NTLM) authentication by retrieving non-authorized resources from an anonymously accessible location
  • Utilize the sprite concept for images
  • Do the same for custom javascripts: for maintenance it is good to separate responsibilities into different libraries, but for performance it is better to combine them into a single resource file
  • Apply minification to resource files: javascript and CSS
  • Apply lazy loading where appropriate (e.g. avoid the processing and retrieval impact of Add-in functionality that is initially not visible and/or only rarely used, in favor of delayed execution if and only if the user intends to use the Add-in)
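The last tip lends itself to a small illustration. Below is a minimal, framework-agnostic sketch of deferring an Add-in's initialization until the user actually activates it; the helper `makeLazy` and the example names are hypothetical, not part of any SharePoint API:

```javascript
// Minimal lazy-loading helper (illustrative sketch, not SharePoint-specific):
// wrap an expensive initializer so it runs at most once, and only when the
// user actually activates the Add-in's placeholder.
function makeLazy(initializer) {
  var initialized = false;
  var result;
  return function () {
    if (!initialized) {
      initialized = true;
      result = initializer(); // heavy work: data retrieval, rendering, ...
    }
    return result;
  };
}

// Usage sketch (hypothetical names): bind the lazy initializer to a user
// action, e.g.
//   $('#newsArchiveTab').on('click', makeLazy(loadNewsArchiveAddin));
// so the Add-in's retrieval and rendering cost is only paid if and when the
// user opens that initially non-visible part of the page.
```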

Thursday, August 6, 2015

Load testing SharePoint Add-in (former App) Model

Validate healthy application performance behaviour

Essential for any Enterprise Application is that it can handle the varying usage load of its users in a performant and scalable manner. Nothing is as embarrassing as a new application that breaks soon after Go-Live under the enthusiastic usage of the users. To prevent that, you must build trust in the scalability of the application, and establish before Go-Live that the application – application software + system infra – can handle the expected load. Enter load testing.
This also holds for a modern SharePoint application that is composed with the Add-in model, the former SharePoint App model. Load testing the Add-in model does, however, bring some extra peculiarities. I enumerate below the ones I encountered.
And note: our load testing proved both valuable and successful. Initially the load test revealed some performance and scalability problems. We then made some essential changes in the application code (in particular in the applied custom Add-ins / Apps), until we achieved our usage load target. And at the crucial moment of Go-Live, the application did not flinch, and perfectly handled the usage load of more than 14,000 users.

Application Performance health factors

We monitored 2 health factors:
  1. Responsiveness of the application for the user, measured as Page Download Time
  2. Scalability of the SharePoint infra, measured as CPU, Memory and I/O utilization on the servers

Application Performance validation approach

  1. Identify target goals for application utilization
  2. "Green zone"
  3. Prove the health factors at the target utilization goals via load testing, simulating the real usage
  4. Identify the ‘breaking’ point via increased load/stress testing
  5. "Red zone" - performance issues monitored
  6. Determine the root cause of the issue; this can be non-optimal code, or insufficient infra parts (CPU, memory, network throughput, database IOPS)
  7. Fix the issue(s)
  8. Repeat the validation, starting at step 2

Load test execution

Load test preparation

  1. Identify the usage/application scenarios you will use to build trust. Select the scenarios that you expect to be used during typical usage. A heavy transaction that in normal operation will only rarely be executed has a negligible effect on the application load.
  2. Establish the target load. This is the application load for average usage. For web applications, this is typically stated in ‘Page Visits per Second’. Note that this is different from Requests per Second / RPS. In nowadays modern apps, a single page visit encompasses multiple http requests: for the page itself, dependent resources as javascript and css, and javascript calls to execute service calls for data retrieval and application functions.
    The determination/specification of the concrete target value is a challenge in itself. One is easily tempted to overestimate the target value - we have 'X' users, so the parallel application usage will be 'X * Y'... However, in reality those 'X' users do not continuously all hit the application: they log on at different times, stay on pages, use other applications, go to the coffee machine, ... In our setup we identified the target value in a twofold manner:
    1. Fact: as we were introducing a renewed intranet, we could reuse the application usage statistics of the current intranet;
    2. Prediction: determine the target value via Microsoft (Bill Baer) Capacity Management Formula, an unofficial best practice recommendation
    In our situation, the 2 values determined via these different paths delivered about the same target value, which confirmed to us that we had determined a realistic value.
  3. Establish the heavy load: this is abnormal but still foreseeable application usage, in special circumstances.
  4. Determine how-to build trust: manual load testing, custom test software, or utilize a load test tool – e.g. HP LoadRunner, Visual Studio LoadTest.
  5. Get sufficient test accounts to simulate different users. This is also required to prevent cache effects during load test execution, e.g. continuously retrieving the user profile values of the same user.
  6. Prepare the test context for the test accounts. E.g. if the application makes use of SharePoint user profile, then the user profile must be provisioned for the test accounts to ensure reasonable load behavior.
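As an illustration of establishing the target load (step 2 above), here is a back-of-the-envelope sketch; all input numbers are hypothetical examples, and this is a simplification, not the Bill Baer capacity management formula referenced earlier:

```javascript
// Rough target-load sketch: translate a user population into an expected
// 'Page Visits per Second' figure. All inputs are hypothetical examples.
function estimatePageVisitsPerSecond(totalUsers, activeFraction, pagesPerUserPerHour) {
  var activeUsers = totalUsers * activeFraction;        // users actually hitting the app
  var pagesPerHour = activeUsers * pagesPerUserPerHour; // total page visits per hour
  return pagesPerHour / 3600;                           // per second
}

// Example: 14,000 users, of which 10% concurrently active, each visiting
// 12 pages per hour => 14000 * 0.10 * 12 / 3600, i.e. roughly 4.7 page
// visits per second as target load.
var target = estimatePageVisitsPerSecond(14000, 0.10, 12);
```

The value of such a sketch is mainly to cross-check against measured statistics of an existing application, as described above.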

Particularities when setting up test scripts for Add-ins / Apps

  1. The load test scenario must participate in the App authentication flow. In essence, this means that the SPAppToken value must be set as FORM POST parameter in the submit request to appredirect.aspx. The value is determined at runtime in the App launcher, and returned in the initial AppRedirect.aspx response.
    In the Visual Studio webtest recording, the reference is made to this hidden field in the response.
    We encountered that the SPAppToken value is not always successfully retrieved at runtime. In some circumstances this can be corrected by monitoring the traffic via Fiddler, and setting SPAppToken to a fixed value taken from the Fiddler trace.
  2. The FormDigest value is returned in the JSON response of the contextinfo call, instead of as hidden FORM parameter in the response body.
    Resolution: augment the Visual Studio load test with a Text Extraction Rule to extract the value from the /_api/contextinfo JSON response.
  3. By default, Visual Studio LoadTest execution does not mimic the browser cache, with the result that each dependent resource is requested over and over. You can fix this by configuring 'parseDependentRequests = false' in the load test script.
  4. Visual Studio LoadTest does not include the execution of javascript in the browser. If required, the activity of the javascript code must be simulated in the test scripts.
  5. With multiple provider-hosted Apps in the load test scenario, Visual Studio can make an error in the runtime construction of the load test recording and assign a wrong {app_?} value. In such a case, you must manually add a '<ContextParameter Name="AppId_1" Value="<APP domain value>" />', and correct the relevant Requests in the script to send the request to the correct app-domain.
  6. The Visual Studio LoadTest recording misses setting the header variable ‘Origin’, which hinders CORS protocol handling.
  7. You can easily overstate the usage load by setting the ‘concurrent user’ configuration value. This configuration parameter is misleading: it does not really simulate actual users. It merely sets the number of threads in the load test execution from which to continuously execute the webtest(s) in the load test scenario. Per thread, after finishing the webtest, the execution halts for the thinktime value, and then repeats. If you set the thinktime to zero – which is what Microsoft advises on Technet, "Don't use think times…" – the effect is that requests are continuously fired against your application. The load on the application is then much higher than the value configured in ‘concurrent users’ suggests.
  8. The Visual Studio load agent itself can become the limiting factor. If you want to simulate a larger concurrent usage, this results in an equally large set of threads in the Visual Studio execution, all of them busy executing and monitoring a webtest instance. The CPU on the load agent rises to 100%, and the load does not increase linearly with the number of ‘concurrent users’ aka threads.
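Points 7 and 8 boil down to simple arithmetic. A sketch (figures are illustrative) of the load actually generated by the ‘concurrent user’ setting:

```javascript
// Approximate generated load of a Visual Studio load test scenario: each
// 'concurrent user' is a thread that runs a webtest (taking webtestSeconds),
// waits thinkTimeSeconds, and then repeats.
function testsPerSecond(concurrentUsers, webtestSeconds, thinkTimeSeconds) {
  return concurrentUsers / (webtestSeconds + thinkTimeSeconds);
}

// 100 'concurrent users', a webtest of 5 seconds, think time 60 seconds
// => roughly 1.5 tests per second: a modest load.
var modest = testsPerSecond(100, 5, 60);

// The same 100 threads with think time 0 => 20 tests per second: the fired
// load is over 13x higher, although the 'concurrent users' setting is
// unchanged.
var hammering = testsPerSecond(100, 5, 0);
```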

Load test monitoring

  • CPU, memory and disk IO per server: WFE, SharePoint backend, AppHost
  • State of the IIS queue on WFE and AppHost
  • Page download times
  • Slowest pages

Interpretation of load test output

  1. The (average) Page Response Time is the download time of the main request, augmented with the download times of all dependent requests beneath that main request.
  2. The RPS / Requests per Second output is not fit to determine whether the application + infrastructure can handle the foreseen application usage. The application usage translates into Page Visits per Second, in which each page visit typically encompasses multiple (http) requests: the .aspx request, and requests for javascript and css resources. In the App execution model, each App launch on the page is in effect a page visit of its own. As a result, the RPS factor is of little use; you must measure the ‘Page Visits per Second’ factor. A pragmatic way to monitor this is to set the thinktime of the webtest to 1 minute, so that each webtest is executed once per minute. The ‘Page Visits per Second’ factor then equals the Visual Studio reported 'Tests per Second'.

Wednesday, July 29, 2015

Handy resource for Excel Services external data troubleshooting

I have a 2-step setup for a dashboard solution provisioned via Excel Services:
  1. Functional data managers / compliance officers maintain the data offline in Excel worksheets, and when the data maintenance effort is finished, publish the worksheet as datasource to a SharePoint document library
  2. A separate 'View' dashboard connects via Excel Services to the 'datasource' Excel worksheets, and renders the dashboard - charts, KPIs and so on; This 'view' dashboard worksheet is via Excel Web Access rendered on the SharePoint dashboard page
Opening the dashboard page results in error message Unable to refresh data for a data connection in the workbook..... In the ULS log, only minimal relevant information was logged: "Refresh failed for <data connection> in the workbook....". Via internet search I found a very valuable resource, Excel Services data refresh flowchart (codename: Excel Services Troubleshooting). This helped me find and fix the problem.

Excel Services - 400 Bad Request due large cookie

Playing with Excel Services to compose a dashboard page, I suddenly encountered HTTP 400's on requesting the dashboard page. I monitored the request handling in Fiddler; it showed HTTP Error 400. The size of the request headers is too long. So I inspected the request, and noticed that the Cookie header had somehow grown to a very large (string) value.
A pragmatic resolution is to close all IE instances and start a fresh IE session. That resolves the issue.
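To verify whether the cookie is indeed the culprit, you can measure its size from the browser's developer console. A minimal sketch; note that the 16384-byte figure is an assumption on my part (the default http.sys per-header MaxFieldLength limit), so verify the configured limit for your own servers:

```javascript
// Rough check of the Cookie request-header size (run in the browser console
// on the affected page). 16384 bytes is assumed as the default http.sys
// per-header limit; treat it as an assumption and verify for your servers.
function cookieHeaderBytes(cookieString) {
  // 'Cookie: ' prefix plus the serialized cookie value
  return 'Cookie: '.length + cookieString.length;
}

var HEADER_LIMIT = 16384; // assumed default per-header limit

// In the browser: cookieHeaderBytes(document.cookie) > HEADER_LIMIT
// indicates the oversized-cookie scenario described above.
```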

Friday, July 24, 2015

Excel SaveAs to SharePoint failing due required document library properties

In a business process we publish a snapshot from an Excel workbook to SharePoint. The VBA code for this is simple: ActiveWorkbook.SaveAs "<url of document library>" & ActiveWorkbook.Name, FileFormat:=xlOpenXMLWorkbookMacroEnabled.
However, execution of this code results in error Run time error '1004': Index refers beyond end of list. The direct cause is that the document library includes a mandatory metadata field; as this field is not set in the Excel workbook, SharePoint refuses the upload. Sadly, it appears not possible to pre-set the Office 'Document Properties - Server' from VBA code.
2 pragmatic alternatives to work around the issue:
  1. make the field / document library property non-required,
  2. or modify the field / document library property to have a default value

Wednesday, July 22, 2015

Convert Excel file into Excel Services compliant variant

Excel Services does not support the entire Excel feature set. Among others, the following aspects are not supported:
  • VBA code,
  • Macros,
  • Comments,
  • Shapes,
  • External Links, to external workbooks
  • (Formula) Names that refer to external workbooks,
  • Data validations,
  • space(s) in name of worksheet connected as Range Name
(see Differences between using a workbook in Excel and Excel Services)
If your 'source' Excel workbook contains any of the above, trying to use it in Excel Services - e.g. via the Chart WebPart - results in the generic error 'Exception has been thrown by the target of an invocation'. A required step before using a 'source' Excel workbook in Excel Services is therefore to convert it into a compliant variant. Excel itself does not include such functionality, but you can facilitate the user (typically functional data management) via a VBA macro in the Excel sheet.

VBA Code

Sub SaveWorkbookAsNewFile()
  Dim ActSheet As Worksheet
  Dim ActBook As Workbook
  Dim CurrentFile As String
  Dim NewFileName As String
  Dim NewFileType As String
  Dim NewFile As String
  Dim ws As Worksheet

  Application.ScreenUpdating = False
  CurrentFile = ThisWorkbook.FullName
  ThisWorkbook.Save
  RemoveDataValidations ActBook:=ActiveWorkbook
  RemoveComments ActBook:=ActiveWorkbook
  RemoveShapes ActBook:=ActiveWorkbook
  BreakLinks ActBook:=ActiveWorkbook
  RemoveNamesToExternalReferences ActBook:=ActiveWorkbook
  RemoveConditionalFormatting ActBook:=ActiveWorkbook
  RemoveSpacesFromWorksheets ActBook:=ActiveWorkbook
  RemoveVBA ActBook:=ActiveWorkbook

  NewFileType = "Excel Workbook (*.xlsx), *.xlsx," & _
    "All files (*.*), *.*"
  NewFile = Application.GetSaveAsFilename( _
    InitialFileName:=NewFileName, _
    fileFilter:=NewFileType)
  If NewFile <> "" And NewFile <> "False" Then
    ActiveWorkbook.SaveAs Filename:=NewFile, _
      FileFormat:=xlOpenXMLWorkbook, _
      Password:="", _
      WriteResPassword:="", _
      ReadOnlyRecommended:=False, _
      CreateBackup:=False
  End If
  ThisWorkbook.Close False
  Workbooks.Open CurrentFile, False
  Application.ScreenUpdating = True
End Sub

Sub RemoveVBA(ActBook As Workbook)
  On Error Resume Next
  Dim Element As Object
  With ActBook.VBProject
    For Each Element In .VBComponents
      .VBComponents.Remove Element
    Next
    For x = .VBComponents.Count To 1 Step -1
      .VBComponents(x).CodeModule.DeleteLines 1, .VBComponents(x).CodeModule.CountOfLines
    Next x
  End With
End Sub

Sub RemoveDataValidations(ActBook As Workbook)
  Dim ws As Worksheet
  For Each ws In ActBook.Worksheets
    ws.Cells.Validation.Delete
  Next ws
End Sub

Sub RemoveComments(ActBook As Workbook)
  Dim ws As Worksheet
  Dim xComment As Comment
  For Each ws In ActBook.Worksheets
    For Each xComment In ws.Comments
      xComment.Delete
    Next
  Next ws
End Sub

Sub RemoveShapes(ActBook As Workbook)
  Dim ws As Worksheet
  Dim sh As Shape
  For Each ws In ActBook.Worksheets
    For Each sh In ws.Shapes
      sh.Delete
    Next sh
  Next ws
End Sub

Sub BreakLinks(ActBook As Workbook)
  Dim Links As Variant
  Links = ActBook.LinkSources(Type:=xlLinkTypeExcelLinks)
  For i = 1 To UBound(Links)
    ActBook.BreakLink _
      Name:=Links(i), _
      Type:=xlLinkTypeExcelLinks
  Next i
End Sub

Sub RemoveNamesToExternalReferences(ActBook As Workbook)
  Dim nm As Name
  For Each nm In ActBook.Names
    If InStr(nm.RefersTo, "[") <> 0 Then
      nm.Delete
    End If
  Next
End Sub

Sub RemoveConditionalFormatting(ActBook As Workbook)
  Dim ws As Worksheet
  For Each ws In ActBook.Worksheets
    ws.Cells.FormatConditions.Delete
  Next ws
End Sub

Sub RemoveSpacesFromWorksheets(ActBook As Workbook)
  Dim ws As Worksheet
  For Each ws In ActBook.Worksheets
    ws.Name = Replace(ws.Name, " ", "_")
  Next ws
End Sub

Sunday, July 12, 2015

Tabbed view on document library

The ‘group by’ native functionality of the XsltListViewWebPart is convenient to present a classified view on the contents of a SharePoint List / Library. One of our departments required to go beyond that, and asked for a tabbed view on the document library: a tab per month per year.
To achieve this, one basically has the following options:
  1. Build a custom webpart. However, this is old-school SharePoint platform utilization, and in our company by default disallowed.
  2. Build a custom HTML / javascript UI (App), and connect via SharePoint webservices. Although this setup nicely fits in ‘modern SharePoint App-development’, for this specific scenario the drawback is that you then also need to develop yourself all of the Office integration that SharePoint standard delivers for a rendered document library (via ECB menu).
  3. Reuse the XsltListViewWebPart, but specify your own Xslt styling. This approach suffers from the same drawback as the ‘modern App’ alternative: you are then required to include in the custom Xslt all the code to render in an Office-integration friendly manner.
  4. Reuse the XsltListViewWebPart, and dynamically modify the standard grouped-by layout into a tabbed view. Beyond reuse of the native Office integration, this approach also reuses the lazy loading per group that is native in the XsltListViewWebPart group-by handling. Especially with a larger document library, this makes it much more performant than retrieving the entire contents at once.

Client-side transform of group-by layout into tabbed view

The transformation of the standard group-by layout into a tabbed view can be achieved with full client-side code only. To achieve the effect, I inspected the standard delivered html, and next coded the transformation logic in jQuery.

Particulars

  • The native 'group-by' functionality renders the header(s) of the groups. In a tabbed-view layout, the selected tab however already visualizes which group is selected, and the group headers are undesirable in the rendering.
  • The native 'group-by' functionality opens a new group in addition to the one(s) already open. For a tab-view experience, the views must be exclusive and act as toggles: selecting one tab automatically closes the tab selected before.
  • The native 'group-by' functionality also includes a 'remember' function: by default a grouped-by layout opens with the group(s) opened as when the visitor was last on the page. For a consistent user experience, it is then required to pre-select the associated tab button.

The 'App' code

<style type="text/css">
.et-tab { <omitted…> }
.et-tab-active { <omitted…> }
.et-tab-inactive { <omitted…> }
.et-separator { height: 5px; background-color: rgb(134, 206, 244); }
</style>
<script>
var TabbedListView = window.TabbedListView || {};
TabbedListView.UI = function () {
  function MonthToInt(month) { <omitted…> }

  function getCookieValue(cookieName) {
    if (document.cookie.indexOf(cookieName) != -1) {
      var cookies = document.cookie.split("; ");
      for (var cookieSeq in cookies) {
        var cookieSpec = cookies[cookieSeq];
        if (cookieSpec.indexOf(cookieName) != -1 && cookieSpec.indexOf("=") != -1) {
          return unescape(cookieSpec.split("=")[1]);
        }
      }
    }
    return undefined;
  }

  function TabbedView() {
    var tabrow = $("<div class='et-tabrow'></div>");
    $(".ms-listviewtable")
      .before($(tabrow))
      .before("<div class='et-separator'></div>");
    $(".ms-listviewtable").children().each(function(i) {
      // Grouping-row: level 0 or level 1
      if ($(this).attr("groupString") !== undefined) {
        // Month - lowest group level.
        if ($(this).children("[id='group1']").length > 0) {
          var action = $("<a></a>");
          // Set the buttonlabel := '<month> <year>' by extracting
          // the values from the original headings.
          var monthValue = $(this).find("a").parent().clone()
            .children().remove().end().text().split(" : ")[1];
          var parentId = $(this).attr('id')
            .substring(0, $(this).attr('id').length - 2);
          var group0 = $(this).parent().children("[id='" + parentId + "']");
          var yearValue = $(group0).find("a").parent().clone()
            .children().remove().end().text().split(" : ")[1];
          $(action).text(monthValue + " " + yearValue);
          $(action).click(function() {
            var parentId = $(this).parent().attr('id');
            var parentTBodyId = "titl" + parentId.substring(0, parentId.length - 2);
            var actualAA = $(".ms-listviewtable")
              .find("tbody[id='" + parentTBodyId + "']").find("a");
            if ($(actualAA).find('img').attr('src').endsWith("plus.gif")) {
              $(actualAA).trigger('click');
            }
            var actualA = $(".ms-listviewtable")
              .find("tbody[id='titl" + parentId + "']").find("a");
            $(actualA).trigger('click');
            if ($(this).parent().hasClass("et-tab-inactive")) {
              $(".ms-listviewtable").children().each(function(i) {
                if ($(this).attr("groupString") !== undefined) {
                  $(this).hide();
                }
              });
              $(".et-tabrow").children().each(function(i) {
                if ($(this).hasClass("et-tab-active")) {
                  $(this).find("a").click();
                }
              });
              $(this).parent().removeClass("et-tab-inactive");
              $(this).parent().addClass("et-tab-active");
            } else {
              $(this).parent().removeClass("et-tab-active");
              $(this).parent().addClass("et-tab-inactive");
            }
          });
          // Add 'tab-button' to tab-row; in chronological sorted order.
          var button = $("<span class='et-tab'></span>");
          $(button).attr('id', $(this).attr('id').substring(4, $(this).attr('id').length));
          $(button).append($(action));
          var totalMonths = parseInt(yearValue) * 12 + MonthToInt(monthValue);
          $(button).data('TotalMonths', totalMonths);
          var added = false;
          $(".et-tabrow").children().each(function(i) {
            if (!added && parseInt($(this).data("TotalMonths")) > totalMonths) {
              $(this).before($(button));
              added = true;
            }
          });
          if (!added) $(tabrow).append($(button));
          $(button).addClass("et-tab-inactive");
        }
        $(this).hide();
      }
    });
    ExecuteOrDelayUntilScriptLoaded(function() {
      var cookieValue = getCookieValue("WSS_ExpGroup_");
      var group1Opened = false;
      if (cookieValue !== undefined) {
        var expGroupParts = unescape(cookieValue).split(";#");
        for (var i = 1; i < expGroupParts.length - 2; i++) {
          if (expGroupParts[i+1] !== "&") {
            group1Opened = true;
            break;
          } else {
            i++;
          }
        }
      }
      if (group1Opened) {
        // XsltListViewWebPart standard behaviour includes a 'remember'
        // functionality: open the group(s) that was/were open before
        // refreshing the page with the grouped-view. Overload that behaviour
        // to make sure the 'tab-row' state is consistent with that.
        $.prototype.base_ExpColGroupScripts = ExpColGroupScripts;
        ExpColGroupScripts = function(c) {
          var result = $.prototype.base_ExpColGroupScripts(c);
          $(".ms-listviewtable").find("tbody[isLoaded]").each(function(i) {
            if ($(this).find("td").text() === 'Loading....') {
              var bodyId = $(this).attr('id')
                .substring(4, $(this).attr('id').length - 1);
              var tabButton = $(".et-tabrow").children("[id='" + bodyId + "']");
              if ($(tabButton).hasClass("et-tab-inactive")) {
                $(tabButton).removeClass("et-tab-inactive");
                $(tabButton).addClass("et-tab-active");
              }
            }
          });
          // Reset function
          ExpColGroupScripts = $.prototype.base_ExpColGroupScripts;
          return $(result);
        };
      } else {
        $(".et-tabrow span:first-child").find("a").trigger('click');
      }
    }, "inplview.js");
    $(".ms-listviewtable").show();
  }

  var ModuleInit = (function() {
    $(".ms-listviewtable").hide();
    _spBodyOnLoadFunctionNames.push("TabbedListView.UI.TabbedView");
  })();

  // Public interface
  return {
    TabbedView: TabbedView
  }
}();
</script>

Update: support for multiple XsltListViewWebParts on page

The above 'App' code works fine in case of a single XsltListViewWebPart on the page. However, in our company we also have document dashboards that give access to 'archived' and 'active' documents. The above code requires some updates to be usable for 1 or more XsltListViewWebPart instances on a single page.
<style type="text/css">
<omitted…>
</style>
<script>
var TabbedListView = window.TabbedListView || {};
TabbedListView.UI = function () {
  function MonthToInt(month) { <omitted…> }

  function getCookieValue(cookieName) {
    var cookieNameLC = cookieName.toLowerCase();
    if (document.cookie.toLowerCase().indexOf(cookieNameLC) != -1) {
      var cookies = document.cookie.split("; ");
      for (var cookieSeq in cookies) {
        var cookieSpec = cookies[cookieSeq];
        if (cookieSpec.toLowerCase().indexOf(cookieNameLC) != -1 &&
            cookieSpec.indexOf("=") != -1) {
          return unescape(cookieSpec.split("=")[1]);
        }
      }
    }
    return undefined;
  }

  var triggerCtxIsInit = false;

  function initTabSelection(webpartId) {
    var lstVw = $('div[WebPartID^="' + webpartId + '"]');
    var cookieValue = getCookieValue("WSS_ExpGroup_{" + webpartId + "}");
    var group1Opened = false;
    if (cookieValue !== undefined) {
      var expGroupParts = unescape(cookieValue).split(";#");
      for (var i = 1; i < expGroupParts.length - 2; i++) {
        if (expGroupParts[i+1] !== "&") {
          group1Opened = true;
          break;
        } else {
          i++;
        }
      }
    }
    if (group1Opened) {
      // XsltListViewWebPart standard behaviour includes a 'remember'
      // functionality: open the group(s) that was/were open before
      // refreshing the page with the grouped-view. Overload that
      // behaviour to make sure the 'tab-row' state is consistent with that.
      if ($.prototype.base_ExpColGroupScripts === undefined) {
        $.prototype.base_ExpColGroupScripts = ExpColGroupScripts;
        ExpColGroupScripts = function(c) {
          var result = $.prototype.base_ExpColGroupScripts(c);
          $(".ms-listviewtable").find("tbody[isLoaded]").each(function(i) {
            if ($(this).find("td").text() === 'Loading....') {
              var bodyId = $(this).attr('id')
                .substring(4, $(this).attr('id').length - 1);
              var tabButton = $(".et-tabrow").children("[id='" + bodyId + "']");
              if ($(tabButton).hasClass("et-tab-inactive")) {
                $(tabButton).removeClass("et-tab-inactive");
                $(tabButton).addClass("et-tab-active");
              }
            }
          });
          return $(result);
        };
      }
    } else {
      triggerCtxIsInit = true;
      $(lstVw).parent().find(".et-tabrow span:first-child")
        .find("a").trigger('click');
      triggerCtxIsInit = false;
    }
  }

  function TabbedView() {
    $(".ms-listviewtable").each(function(i) {
      ExecTabbedView($(this));
    });
  }

  function ExecTabbedView(lstVw) {
    var tabrow = $("<div class='et-tabrow'></div>");
    $(lstVw).before($(tabrow)).before("<div class='et-separator'></div>");
    $(lstVw).children().each(function(i) {
      // Grouping-row: level 0 or level 1
      if ($(this).attr("groupString") !== undefined) {
        // Month - lowest group level.
        if ($(this).children("[id='group1']").length > 0) {
          var action = $("<a></a>");
          // Set the buttonlabel := '<month> <year>' by extracting
          // the values from the original headings.
          var monthValue = $(this).find("a").parent().clone().children()
            .remove().end().text().split(" : ")[1];
          var parentId = $(this).attr('id')
            .substring(0, $(this).attr('id').length - 2);
          var group0 = $(this).parent().children("[id='" + parentId + "']");
          var yearValue = $(group0).find("a").parent().clone().children()
            .remove().end().text().split(" : ")[1];
          $(action).text(monthValue + " " + yearValue);
          // Add clickhandler to:
          // - check the 'parent-group-header' in the table whether already
          //   opened; if not trigger it to open. This is required to reuse
          //   the standard XsltListViewWebPart behaviour wrt remember
          //   state upon refresh.
          // - invoke the 'original' one of the group-header A in the
          //   table; to trigger the default behaviour
          // - if 'selected':
          //   - hide the headings that are visualized by the default
          //     clickhandler
          //   - deselect the 'tab' that is current active
          //   - visualize the 'tab' to display as active
          // - if 'deselected'
          //   - visualize the 'tab' to display as inactive
          $(action).click(function() {
            // On first user-initiated click; reset the overload of
            // ExpColGroupScripts as only applicable on initialization.
            if (!triggerCtxIsInit &&
                $.prototype.base_ExpColGroupScripts !== undefined) {
              ExpColGroupScripts = $.prototype.base_ExpColGroupScripts;
              $.prototype.base_ExpColGroupScripts = undefined;
            }
            var parentId = $(this).parent().attr('id');
            var tabrow = $(this).parents('div[class^="et-tabrow"]');
            var lstVw = $(tabrow).parent()
              .find('table[class^="ms-listviewtable"]');
            var actualAA = $(lstVw).find("tbody[id='titl" +
              parentId.substring(0, parentId.length - 2) + "']").find("a");
            if ($(actualAA).find('img').attr('src').endsWith("plus.gif")) {
              $(actualAA).trigger('click');
            }
            var actualA = $(lstVw).find("tbody[id='titl" + parentId + "']").find("a");
            $(actualA).trigger('click');
            if ($(this).parent().hasClass("et-tab-inactive")) {
              $(lstVw).children().each(function(i) {
                if ($(this).attr("groupString") !== undefined) {
                  $(this).hide();
                }
              });
              $(tabrow).children().each(function(i) {
                if ($(this).hasClass("et-tab-active")) {
                  $(this).find("a").click();
                }
              });
              $(this).parent().removeClass("et-tab-inactive");
              $(this).parent().addClass("et-tab-active");
            } else {
              $(this).parent().removeClass("et-tab-active");
              $(this).parent().addClass("et-tab-inactive");
            }
          });
          // Add 'tab-button' to tab-row; in chronological sorted order.
          var button = $("<span class='et-tab'></span>");
          $(button).attr('id', $(this).attr('id')
            .substring(4, $(this).attr('id').length));
          $(button).append($(action));
          var totalMonths = parseInt(yearValue) * 12 + MonthToInt(monthValue);
          $(button).data('TotalMonths', totalMonths);
          var added = false;
          $(tabrow).children().each(function(i) {
            if (!added && parseInt($(this).data("TotalMonths")) > totalMonths) {
              $(this).before($(button));
              added = true;
            }
          });
          if (!added) $(tabrow).append($(button));
          $(button).addClass("et-tab-inactive");
        }
        $(this).hide();
      }
    });
    var webpartId = $(lstVw).parents('div[WebPartID^!=""]').attr('WebPartID');
    ExecuteOrDelayUntilScriptLoaded(
      function () { initTabSelection(webpartId) }, "inplview.js");
    $(lstVw).show();
  }

  var ModuleInit = (function() {
    $(".ms-listviewtable").each(function(i) { $(this).hide(); });
    _spBodyOnLoadFunctionNames.push("TabbedListView.UI.TabbedView");
  })();

  // Public interface
  return {
    TabbedView: TabbedView
  }
}();
</script>

Tuesday, May 19, 2015

Beware: BLOB cache may miss modifications via SPD

There are 2 alternative approaches to manually (i.e. not via a SharePoint solution) update content resources (.css, .js, page layouts, masterpage) that are provisioned in a SharePoint library. Via the browser: you can download the content resource/file, make the changes in the downloaded file, and upload, check in and publish the modified file into the SharePoint library. The alternative approach is to open the SharePoint site in SharePoint Designer, open + edit the content resource directly from within SPD, and afterwards check in + publish the modified file from SPD.
Although both approaches work to modify the content file administered in a SharePoint library, the SPD alternative has a caveat. This week we noticed that the SharePoint BLOB cache may miss the modification trigger when the modification is done through SPD. One of our developers changed a CSS file via SPD, but when browsing the site did not see the effect of his modifications. I immediately suspected the BLOB cache. To verify, I inspected the BLOB cache folder on the WFEs, and noticed that on all 3 WFEs the date of the file in cache was earlier than the published date of the file with modifications. All other files in the cache appeared up-to-date, so it was not a situation of complete BLOB cache corruption; merely the file changed via SPD was outdated in the cache. The pragmatic resolution here is to delete the outdated file from the BLOB cache folder on all WFEs.

Wednesday, May 13, 2015

Takeaways from SharePoint YamJam

The SharePoint product team, augmented with some MVPs, hosted a SharePoint YamJam tonight. A.o. Bill Baer and Benjamin Niaulin were present to answer questions and share additional insights on what was presented the previous week at MS Ignite. Below I've summarized my main takeaways from this interesting YamJam.
[Bill Baer]
For NextGen Portal experiences such as Delve, Video, etc. we're investing in bringing them to our on-premises customers via hybrid scenarios as many take dependencies on technologies we cannot package on a DVD, I.e. WAMS, Office Graph, etc.
[Benjamin Niaulin]
You'll want to start using Groups for Office 365 as much as possible for Team Collaboration as a lot of things will tie in to it, including the new portal. Get familiar with Delve right away.
[Bill Baer]
SharePoint Designer will not be shipped with SharePoint 2016; however, SharePoint Designer 2013 can be used with SharePoint 2016.
[Bill Baer]
SharePoint social capabilities as designed and delivered in SharePoint 2013 will be carried forward into SharePoint 2016 in addition to new integrated Yammer experiences thru hybrid to include Post to Yammer, etc. from SharePoint Document Libraries.
[Bill Baer]
In SharePoint Server 2016 updates are orchestrated differently mitigating the offline upgraders as experienced in earlier versions of SharePoint, we're moving to a B2B online upgrade model that alleviates the need for post-patch experiences.
We'll share more details soon, at a high level patching remains thru MSPs (w/ significant reduction), but now with 0 downtime, removing dependencies between FE and BE components and now upgraders are all done online.
[Benjamin Niaulin]
Important: InfoPath Forms Services will be in SP2016 but not InfoPath client.
My experience, though it may change: When I installed Office 2016 it removed my InfoPath 2013 on my computer. May have been a bug or something still being worked on.
. . . That’s not a bug. It’s a feature (or rather a “known issue”). Microsoft explains it at bullet No. 4 here: Known issues for Office 2016 Preview
[Bill Baer]
We'll continue to invest in broader availability of endpoints to support search scenarios, particularly connectors, and deliver on the connectors we already ship. In addition we're shipping some new APIs to support surfacing external sources in Delve (as demonstrated at \\build) and have a number of partners in our search TAP program that are actively building connectors using some of our new search experiences and endpoints.
[Mike Holste]
Regarding question: how to decide when to use Yammer and SharePoint and Office 365 Groups; Check out this Channel9 recording of Ignite session for additional guidance on what to use and when: How to Decide When to Use SharePoint and Yammer and Office 365 Groups and Outlook and Skype
[Benjamin Niaulin]

I think it'll be hard to give as it'll vary based on each organization's needs. The SharePoint Team Site is still the "big" collaboration solution with many libraries, workflows, metadata, term store, etc... It's like the ECM.

Groups is more about Team Collaboration. It pulls things from different products and provides an easy to consume solution that works well from anywhere and on any device with the updates coming.

I have a Group for:

- Blogs

- A Specific project I am doing with others (O365 Guide)

- Sharegate Marketing (especially for Calendar and OneNote)

and anyone can create a new group and get started.

The Team Site is a little bigger and requires heavier thinking, Content Types, how to place everything so that it makes sense etc.

It'll come down to knowing SharePoint vs knowing O365 Experiences and compare to see which fits best for your customers individual needs.

The reality is the users in the organizations are already using other things all over the internet for free or low cost $ per user per month. Bypassing IT altogether.

Groups provides an alternative they can consume without having to go to IT to request a heavy duty Site that requires SharePoint Training (even though it's SharePoint behind it)

[Bill Baer]
On the development side, we'll continue to support FTC, invest more in hybrid apps via CAM, in addition to bringing much of the cloud experience to on-premises to draw parity between developing for the service and on-premises. Namely subscription apps, common consent, and Office 365 APIs as initial investment areas.

Sunday, May 3, 2015

Beware of script-dependencies with AMD loading

Asynchronous Module Definition (AMD) is very useful to manage the loading of (larger) sets of libraries in the JavaScript runtime engine. Instead of explicitly taking responsibility in your own code for loading each needed JavaScript library one by one, it is more manageable to delegate this to one of the AMD implementations. Require.js is to my knowledge the most widely applied currently, but there are alternatives.
Our developers have implemented AMD loading in multiple of our custom-built SharePoint Apps, utilizing Require.js. On inspecting (F12, Fiddler) the request/response traffic of our App-Model based intranet, we observed that sometimes a specific script file is requested twice, the second time from the wrong URL and not in the minified version. When I asked the App developer about it, he could not explain it: in his code, he explicitly specified loading the minified library. Also weird is that the re-request does not always occur.
I decided to inspect runtime behavior plus the App code myself, and try to analyze (or rather, puzzle) what caused the behavior.

Code inspection

require(['../Scripts/AppHelpers'], function () {
    var spHostUrl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
    var hostProtocol = spHostUrl.split("//")[0];
    var hostRoot = spHostUrl.split("//")[1].split("/")[0];
    spHostUrl = hostProtocol + "//" + hostRoot;
    require([spHostUrl + '/Style%20Library/Scripts/jquery-1.11.1.min.js'], function () {
        require([spHostUrl + '/_layouts/15/MicrosoftAjax.js',
                 spHostUrl + '/_layouts/15/init.js',
                 spHostUrl + '/_layouts/15/sp.runtime.js',
                 spHostUrl + '/_layouts/15/sp.js',
                 spHostUrl + '/_layouts/15/sp.requestexecutor.js',
                 spHostUrl + '/_layouts/15/sp.core.js',
                 spHostUrl + '/_layouts/15/sp.init.js',
                 spHostUrl + '/_layouts/15/ScriptResx.ashx?culture=en%2Dus&name=SP%2ERes',
                 spHostUrl + '/_layouts/15/sp.ui.dialog.js',
                 "../Scripts/jquery.rotate.js",
                 "../Scripts/moment.min.js",
                 "../Scripts/moment-timezone.min.js",
                 "../Scripts/ListController.js",
                 "../Scripts/UserSettings.js",
                 "../Scripts/sp.communica.js",
                 "../Scripts/App.js"], function () {
            jQuery(document).ready(function () {
                initialize(function () { });
            });
        });
    });
});
Basically, the above code instructs require.js to first load the library '../Scripts/AppHelpers.js'; once that is loaded, to load the jQuery library; and once that is loaded, to load a bunch of other libraries that are, among others, dependent on jQuery. And when all libraries are loaded, to invoke a custom initialization function (not displayed here, as it is not relevant for the issue).

Runtime analysis, via Fiddler and F12

In Fiddler, often though not always, the following sequence of requests is visible.
So first '/Scripts/moment.min.js' is successfully requested, followed by an unsuccessful (HTTP 404) request for '/Pages/moment.js'. The initiator of the HTTP request is the setting of the 'src' property of a <script> element; likely this is initiated from require.js handling.
I also inspected the runtime DOM. Herein it becomes clear why the browser requests the library a second time. And it is indeed inserted in the DOM by require.js handling, as visible via the require.js properties.

Explanation: asynchronous loading + library-dependency

In the above displayed App HTML code, you see that in the 3rd require.js load handling, a set of libraries is requested for load at the same level. Crucial here is that:
RequireJS uses Asynchronous Module Loading (AMD) for loading files. Each dependent module will start loading through asynchronous requests in the given order. Even though the file order is considered, we cannot guarantee that the first file is loaded before the second file due to the asynchronous nature
In the App code, moment.min.js and moment-timezone.min.js are specified as required at the same level:
"../Scripts/moment.min.js", "../Scripts/moment-timezone.min.js",
But AMD thus does not guarantee that moment.min.js is loaded BEFORE moment-timezone.min.js. And as moment-timezone.min.js in turn includes "define(["moment"],b)", require.js resolves this by loading the moment.js library in case it is not yet loaded. This explains why the issue does not always occur: sometimes moment.min.js is already loaded, sometimes not…
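The resolution that produces the 404 can be sketched as follows. This is a deliberately simplified model of how an AMD loader maps a dependency to a URL, not require.js's actual implementation: a bare module id like "moment" carries no path and no .js extension, so it is resolved against the base URL, which for the App page is the /Pages/ folder.

```javascript
// Simplified model (illustrative, not require.js source): a bare AMD module id
// (no path separator, no .js extension) is resolved against the base url;
// a url-like dependency string is requested as-is.
function resolveDependency(id, baseUrl) {
    var looksLikeUrl = id.indexOf('/') !== -1 || id.slice(-3) === '.js';
    return looksLikeUrl ? id : baseUrl + id + '.js';
}

resolveDependency('moment', '/Pages/');
// → '/Pages/moment.js', the unminified fallback request seen in Fiddler

resolveDependency('../Scripts/moment.min.js', '/Pages/');
// → '../Scripts/moment.min.js', requested as specified
```

This matches the Fiddler trace: the "moment" dependency declared inside moment-timezone.min.js is not recognized as the already-requested '../Scripts/moment.min.js' URL, so it is resolved relative to the page location.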

Solution

There are 2 alternative approaches to resolve the behavior. The essence of both is to make sure that moment.min.js is loaded before the dependent library moment-timezone.min.js:
  1. Extend the above code-pattern of explicitly separating the load of libraries: still retrieve moment.min.js at the 3rd level, and move the load of moment-timezone.min.js to a 4th level:
    require(['../Scripts/AppHelpers'], function () {
        var spHostUrl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
        var hostProtocol = spHostUrl.split("//")[0];
        var hostRoot = spHostUrl.split("//")[1].split("/")[0];
        spHostUrl = hostProtocol + "//" + hostRoot;
        require([spHostUrl + '/Style%20Library/Scripts/jquery-1.11.1.min.js'], function () {
            require([spHostUrl + '/_layouts/15/MicrosoftAjax.js',
                     spHostUrl + '/_layouts/15/init.js',
                     spHostUrl + '/_layouts/15/sp.runtime.js',
                     spHostUrl + '/_layouts/15/sp.js',
                     spHostUrl + '/_layouts/15/sp.requestexecutor.js',
                     spHostUrl + '/_layouts/15/sp.core.js',
                     spHostUrl + '/_layouts/15/sp.init.js',
                     spHostUrl + '/_layouts/15/ScriptResx.ashx?culture=en%2Dus&name=SP%2ERes',
                     spHostUrl + '/_layouts/15/sp.ui.dialog.js',
                     "../Scripts/jquery.rotate.js",
                     "../Scripts/moment.min.js"], function () {
                // 4th level: only requested once moment.min.js is loaded
                require(["../Scripts/moment-timezone.min.js",
                         "../Scripts/ListController.js",
                         "../Scripts/UserSettings.js",
                         "../Scripts/sp.communica.js",
                         "../Scripts/App.js"], function () {
                    jQuery(document).ready(function () {
                        initialize(function () { });
                    });
                });
            });
        });
    });
  2. Configure Require.js to be aware of the Module dependency
    requirejs.config({
        shim: {
            'moment-timezone.min': ['moment.min']
        }
    });
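A related sketch of a third option, assuming the module ids shown here match how the scripts are requested elsewhere in the App (they are illustrative): map the "moment" id that moment-timezone declares directly onto the minified file via require.js paths configuration, so require.js never falls back to resolving it against the page URL:

```javascript
// Sketch (assumed paths, relative to the require.js baseUrl):
// resolve the AMD id "moment" to the minified library, so that
// define(["moment"], …) inside moment-timezone never triggers
// a second request for an unminified /Pages/moment.js.
requirejs.config({
    paths: {
        'moment': '../Scripts/moment.min'
    }
});
```

Note that in paths (and shim) configuration, require.js module ids are specified without the .js extension.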

Friday, April 17, 2015

CQWP, PictureLibrary and Blobcache

ContentByQueryWebPart (CQWP) is a very useful ‘tool’ to display all types of SharePoint data in a page: list data, document libraries, and also picture libraries. But be aware that CQWP has some strange default behaviour with respect to picture libraries: CQWP by default displays the preview images, instead of the actual images.
This also has several performance ramifications:
  1. On the server side: actual images can be cached in the blobcache, but preview images and also thumbnails are by SharePoint design not cached in the blobcache. The reasoning is that the blobcache is for content that is retrieved multiple times (often), while preview images are typically only of interest to a content manager upon functional management of the picture library contents. As a result, each request for a preview image means that SharePoint needs to retrieve it from the content database.
  2. On the client, network + server side: the browser cache can be applied to avoid the browser requesting the same image over and over. But the browser then still needs to query the server whether the cached resource is unmodified at the server (response 304). For images / static resources that do not change (often), even this request can be avoided, minimizing request/response handling between client (browser), server, and the network transfer. Browsers support this via the ‘max-age’ setting. SharePoint supports this ‘max-age’ setting for SharePoint content via… the blobcache. For SharePoint content retrieved outside the blobcache, thus also preview images, the max-age value is not set in the HTTP response. As a result, the browser will query the remote SharePoint server whether the image is unmodified, and the server responds with '304 Not Modified'. And this can end up in some noticeable latency, dependent on how busy the SharePoint server is:
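As a sketch, the max-age for blobcache-served content is configured on the BlobCache element in the web application's web.config; the attribute values below are illustrative, not a recommendation:

```xml
<!-- web.config of the SharePoint web application (illustrative values):
     max-age (in seconds) is emitted as Cache-Control header for content
     served from the blobcache, enabling client-side caching without
     revalidation requests. -->
<BlobCache location="C:\BlobCache\14"
           path="\.(gif|jpg|jpeg|png|css|js)$"
           maxSize="10"
           max-age="86400"
           enabled="true" />
```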
The solution is to modify the CQWP configuration to retrieve and display the actual image. This comprises 2 parts:
  1. In ItemStyle.xsl, change the rendering specification to display ‘EncodedAbsUrl’ instead of ‘ImageUrl’;
  2. And modify the ‘CommonViewFields’ specification of the CQWP instance to also include ‘EncodedAbsUrl’.
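The two changes can be sketched as follows; the surrounding template structure and the second field name are illustrative, your ItemStyle.xsl template and CQWP instance will differ:

```xml
<!-- 1. ItemStyle.xsl (sketch): render the actual image from EncodedAbsUrl
        instead of the preview image from ImageUrl -->
<xsl:variable name="SafeImageUrl">
  <xsl:value-of select="@EncodedAbsUrl" />
</xsl:variable>
<img src="{$SafeImageUrl}" />

<!-- 2. CQWP instance (.webpart export, sketch): make the field available
        to the XSL rendering by adding it to CommonViewFields -->
<property name="CommonViewFields" type="string">EncodedAbsUrl,Text</property>
```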

Friday, April 10, 2015

ScriptEditor reloads script-links

On the homepage of our SharePoint 2013 intranet we allow employees to install apps. One of them is for internal company ‘fun and facts’: Did you know? The overall design of this app is a SharePoint list with the ‘fun and facts’, and a jQuery script to get selected list items via listdata.svc and bind the returned JSON data to HTML elements. The jQuery script is deployed to the Style Library in the site collection, and the link to this script file is provisioned to the page via a ScriptEditor webpart:
<div id="DW-DYK-Container"></div>
<script type="text/javascript" src="/Style Library/Scripts/DidYouKnow.js"></script>
For performance tuning I regularly monitor via Fiddler the HTTP requests that the browser sends for a page visit. With modern web applications constructed partly as html+css+javascript, the number of HTTP requests can grow to a larger set than one is aware of. In the Fiddler trace, I spotted the browser request for the script file ‘/Style Library/Scripts/DidYouKnow.js’, on which SharePoint responds with ‘304 Not Modified’. But to my surprise I also noticed a second HTTP GET request for the same script file, and this URL is made unique to force renewed retrieval from the server.
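The "made unique" URL can be sketched with a minimal model; the assumption here is that uniqueness comes from an appended timestamp query parameter, as jQuery-style dynamic script loading does, this is not SharePoint's actual code:

```javascript
// Minimal sketch of cache-busting (illustrative): appending a unique query
// parameter makes the url differ per request, so the browser cannot serve
// the script from its cache and must retrieve it from the server again.
function cacheBustUrl(url, stamp) {
    var sep = url.indexOf('?') === -1 ? '?' : '&';
    return url + sep + '_=' + stamp;
}

cacheBustUrl('/Style Library/Scripts/DidYouKnow.js', Date.now());
// e.g. '/Style Library/Scripts/DidYouKnow.js?_=1430652000000'
```

This is exactly the behaviour that undoes the benefit of the earlier '304 Not Modified' handling: every page visit pays for a full script download.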
I analyzed what causes this second request for the script file. My finding is that it is caused by the ScriptEditor class, which renders the above ScriptEditor content wrapped within a div-element with CSS class ‘ms-rte-embedcode’.
SharePoint 2013 includes Embed Code handling that detects ‘orphan’ client script code in the page HTML, and does some magic with it. Part of that magic apparently is to force the (re)loading of script-links that are within the ‘ms-rte-embedcode’ div-element, via an equivalent of jQuery’s getScript() method (which also adds a unique part to the URL to force script file reload from the server). However, from a performance and in particular latency perspective, I dislike this behaviour: I do not want the extra request and certainly not the every-time renewed retrieval of a script file of which I know it remains stable. So I came up with an approach to break out of the ‘magic’ of ScriptEditor: move the ‘script’ element outside of the ‘ms-rte-embedcode’ div-element:
<div id="DW-DYK-Container"></div>
<script language='javascript' type='text/javascript'>
    var head = document.getElementsByTagName('head').item(0);
    var script = document.createElement('script');
    script.setAttribute('type', 'text/javascript');
    script.setAttribute('src', '/Style Library/Scripts/DidYouKnow.js');
    head.appendChild(script);
</script>
I admit: the code is more complex than the initial one. But the result is what I want: no more duplicate and repeated forced retrieval of the script file.