  • Genc Kastrati 86 posts 401 karma points
    Jun 28, 2017 @ 10:09
    Genc Kastrati
    0

    Database restore to localhost times out and content transfer to live times out

    Hello all,

    I am trying to do a database restore on my localhost after the Courier upgrade last night, and it does not seem to be working. This is the message I see in the logs:

    2017-06-28 12:00:45,404 [21] INFO  Umbraco.Courier.Core.BackgroundTaskRunner - [Thread 21]  Task added Concorde.CacheHandler.Tasks.DownloadFromRemoteTask+ThrottledUiRunnerTask
    2017-06-28 12:01:55,484 [21] INFO  Umbraco.Courier.Core.BackgroundTaskRunner - [Thread 21]  Task added Concorde.CacheHandler.Tasks.DownloadFromRemoteTask+ThrottledUiRunnerTask
    2017-06-28 12:03:15,511 [21] DEBUG Umbraco.Courier.Persistence.V6.NHibernate.NHibernateProvider - [Thread 21] Closing session, UseGlobalSession: False, key: 84b6f649-c1f3-4508-ad6a-a720fdbbe579
    2017-06-28 12:03:15,686 [21] INFO  Umbraco.Courier.Core.BackgroundTaskRunner - [Thread 21] Shutdown
    2017-06-28 12:03:15,686 [21] INFO  Umbraco.Courier.Core.BackgroundTaskRunner - [Thread 21] Application is shutting down immediately
    2017-06-28 12:03:15,686 [33] INFO  Umbraco.Courier.Core.BackgroundTaskRunner - [Thread 33] Shutdown
    2017-06-28 12:03:46,503 [21] ERROR Umbraco.Courier.Core.TaskManager - [Thread 21] Error SequentialRevisionTaskList with id: 712d229b-ba6d-4c66-8cf7-8b4489e44938
    System.Net.WebException: The operation has timed out
       at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
       at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
       at Umbraco.Courier.RepositoryProviders.WebServiceProvider.RepositoryWebservice.PackageBatch(String sessionKey, ItemIdentifier[] itemIds)
       at Umbraco.Courier.RepositoryProviders.CourierWebserviceRepositoryProvider.Package(ItemIdentifier[] itemIds)
       at Concorde.CacheHandler.Tasks.DownloadFromRemoteTask.ProcessItemIds(String folderPath, ItemIdentifier[] itemIds, Dictionary`2 filesToAdd)
       at Concorde.CacheHandler.Tasks.DownloadFromRemoteTask.ProcessOrphans(Dictionary`2 filesToAdd)
       at Concorde.CacheHandler.Tasks.DownloadFromRemoteTask.Run()
       at Umbraco.Courier.Core.Tasks.SequentialRevisionTaskList.Run()
       at Umbraco.Courier.Core.BackgroundTaskRunner`1.ConsumeTaskInternal(T task)
    2017-06-28 12:03:46,504 [21] ERROR Umbraco.Courier.Core.BackgroundTaskRunner - [Thread 21] An error occurred consuming task
    System.Net.WebException: The operation has timed out
       at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
       at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
       at Umbraco.Courier.RepositoryProviders.WebServiceProvider.RepositoryWebservice.PackageBatch(String sessionKey, ItemIdentifier[] itemIds)
       at Umbraco.Courier.RepositoryProviders.CourierWebserviceRepositoryProvider.Package(ItemIdentifier[] itemIds)
       at Concorde.CacheHandler.Tasks.DownloadFromRemoteTask.ProcessItemIds(String folderPath, ItemIdentifier[] itemIds, Dictionary`2 filesToAdd)
       at Concorde.CacheHandler.Tasks.DownloadFromRemoteTask.ProcessOrphans(Dictionary`2 filesToAdd)
       at Concorde.CacheHandler.Tasks.DownloadFromRemoteTask.Run()
       at Umbraco.Courier.Core.Tasks.SequentialRevisionTaskList.Run()
       at Umbraco.Courier.Core.BackgroundTaskRunner`1.ConsumeTaskInternal(T task)
    

    Does anyone have an idea on how to fix this? Or is this a bug?

    Thank you! Genc

  • Dennis Aaen 4499 posts 18254 karma points admin hq c-trib
    Jun 28, 2017 @ 10:16
    Dennis Aaen
    0

    Hi Genc.

    When you say database restore, do you mean a content restore from the backoffice, or something else?

    Please let me know then I will see what I can do to help you out.

    Best,

    /Dennis

  • Genc Kastrati 86 posts 401 karma points
    Jun 28, 2017 @ 10:17
    Genc Kastrati
    0

    Hi Dennis! Hope you are well!

    Yes, indeed, from Content > Restore > then "Download and Restore". Not doing any funky things with the databases :)

    Thx! Genc

  • Dennis Aaen 4499 posts 18254 karma points admin hq c-trib
    Jun 28, 2017 @ 11:45
    Dennis Aaen
    0

    Hi Genc,

    I am doing well thank you.

    I will try to clone down your site and see if I can do a content restore.

    /Dennis

  • Genc Kastrati 86 posts 401 karma points
    Jun 28, 2017 @ 16:40
    Genc Kastrati
    101

    Hi everyone

    Turns out this was being caused by too many database entries. Here is the rundown of the problem/solution:

    We are running Umbraco 7.4.14 and Courier 3.1.3. The site has around 5000 pages (content nodes) and around 4500 documents. When trying to restore to localhost, it kept failing. When trying to transfer to live, it would fail on pages that had images from the media folder (issuing an SQL timeout error).

    Initially we suspected that the Courier timeouts, both for restoring the database and for transferring content to live, were caused by the size of the media folder, but this turned out not to be the case. It was actually due to the size of the cmsPropertyData table and the SQL query being run against it (each query took 22 seconds, and on pages with multiple media items this added up to many minutes).
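
    To see how bad it is before touching anything, a couple of read-only queries like the ones below give a quick picture. This is just a diagnostic sketch assuming the standard Umbraco 7 schema (cmsPropertyData and cmsDocument joined on versionId, with the newest/published flags); adjust the names if your version differs.

    -- Total property rows in the table
    SELECT COUNT(*) AS TotalPropertyRows FROM cmsPropertyData;

    -- Property rows belonging to versions that are neither the newest nor the published one
    SELECT COUNT(*) AS StalePropertyRows
    FROM cmsPropertyData pd
    JOIN cmsDocument d ON d.versionId = pd.versionId
    WHERE d.newest = 0 AND d.published = 0;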

    So the solution is...

    WARNING: READ CAREFULLY FROM NOW ON

    After searching around I discovered a great script by Dan Lister on this forum post, where he shares an SQL script for cleaning up the database.

    IMPORTANT: I took the script but modified it. Dan's script was looking for a particular nodeObjectType (which will be different for your database). So my script does the exact same thing but without checking for that document type.

    Here it is: clean up Umbraco database script.

    Essentially, it truncates some tables, goes through cmsPropertyData (the main offender - we had almost 900,000 entries) and the other tables, collects the unused versions into temporary tables, deletes those unused versions, and finally drops the temporary tables and rebuilds the indexes.
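
    For reference, here is a minimal sketch of the same idea. This is NOT the linked script, just the general pattern, assuming the standard Umbraco 7 tables cmsDocument, cmsPropertyData, cmsPreviewXml and cmsContentVersion. Take a full database backup and try it on a copy first.

    -- 1. Collect the ids of versions that are neither the newest nor the published
    --    revision of their document into a temporary table.
    SELECT d.versionId
    INTO #staleVersions
    FROM cmsDocument d
    WHERE d.newest = 0 AND d.published = 0;

    -- 2. Delete everything that references those versions.
    DELETE FROM cmsPreviewXml     WHERE versionId IN (SELECT versionId FROM #staleVersions);
    DELETE FROM cmsPropertyData   WHERE versionId IN (SELECT versionId FROM #staleVersions);
    DELETE FROM cmsContentVersion WHERE VersionId IN (SELECT versionId FROM #staleVersions);
    DELETE FROM cmsDocument       WHERE versionId IN (SELECT versionId FROM #staleVersions);

    -- 3. Drop the temporary table and rebuild the indexes on the biggest table.
    DROP TABLE #staleVersions;
    ALTER INDEX ALL ON cmsPropertyData REBUILD;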

    We ran this on our development server, and once it was done, the restore worked. Then we ran it on live, and once that was done, transferring content to live worked as well.

    It is now good as new. Thank you! Genc

  • MuirisOG 382 posts 1284 karma points
    Oct 22, 2019 @ 08:54
    MuirisOG
    0

    Hi Genc,

    Sorry to revisit an old post, but I'm curious about 2 of the tables in this script.

    Do you know what the purpose of the umbracoUser2NodePermission and umbracoCacheInstruction tables is?

    I've been using this script a lot lately as I'm about to upgrade a database from 7.5.14 to 7.15.3, and if I don't run this script first, the upgrade process seems to run into problems.

    There are 445K records in the umbracoUser2NodePermission table.

    Thanks Muiris

  • Genc Kastrati 86 posts 401 karma points
    Oct 22, 2019 @ 09:00
    Genc Kastrati
    0

    Hi Muiris, hah no worries that is why these posts are here :)

    I unfortunately do not know what those tables are used for. It might be best to ask Umbraco Cloud support, but I suspect the upgrade process fails because of all those records, since processing them takes too much time.

  • Simon Dingley 1470 posts 3427 karma points c-trib
    Oct 22, 2019 @ 09:18
    Simon Dingley
    0

    The umbracoCacheInstruction table is used in load-balanced environments to synchronise changes between servers. Generally speaking, each time an action is taken on the master node a copy of the action is stored so that it can be processed by the slave servers. This table should be "self-cleaning".

    I believe that the umbracoUser2NodePermission table maps permissions to users, e.g. if a user should be able to create but not delete a specific node. In newer versions of Umbraco this has been replaced by umbracoUserGroup2NodePermission, which maps permissions at the User Group level.
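
    If you want to see where those 445K rows come from before rebuilding permissions, something along these lines works as a rough check (assuming the Umbraco 7 columns userId, nodeId and permission on umbracoUser2NodePermission; verify against your own schema first).

    -- Which users account for the bulk of the permission rows
    SELECT TOP 20 userId, COUNT(*) AS permissionRows
    FROM umbracoUser2NodePermission
    GROUP BY userId
    ORDER BY permissionRows DESC;

    -- Rows pointing at nodes that no longer exist
    SELECT COUNT(*) AS orphanedPermissionRows
    FROM umbracoUser2NodePermission p
    LEFT JOIN umbracoNode n ON n.id = p.nodeId
    WHERE n.id IS NULL;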

  • MuirisOG 382 posts 1284 karma points
    Oct 22, 2019 @ 09:23
    MuirisOG
    0

    Thanks Genc, Simon,

    I'll continue to use this script and rebuild permissions using groups after the upgrade.

    With groups, we're less likely to end up with 445K records.

    (And we don't have a load-balanced environment)

    Muiris
