
The process exit code was "2" while the expected was "0"



We are facing a weird problem on our Production server with version 3 of ManagePartition.exe. A Master package simultaneously calls about 17 child packages, although only 14 of the 17 are running at any one time. Each of these child packages has an execute command task which runs ManagePartition.exe with the appropriate parameters. The Master package runs daily in the morning; it completes without any problem about half the time and fails the other half.
The error we get from the child package is as follows:
The process exit code was "2" while the expected was "0"

I tried to run the Master package in debug mode and through SQL Agent on the day of the failure to mimic the problem, but it can't be reproduced.

Does anyone know when the process returns an exit code of "2" and what that code means?
Any help regarding this would be greatly appreciated.

Ritesh B.


rightesh wrote Jul 27, 2015 at 3:29 PM

Hi Everyone,

It looks like we were able to solve the problem: some of the multiple instances of the exe running at the same time were deadlocking against each other, and one was chosen as the deadlock victim.
The best way to fix it is to set the lock escalation level to AUTO for all the tables that this exe might touch.
Here is the syntax on SQL Server to set it.
ALTER TABLE <schema_name>.<table_name> SET (LOCK_ESCALATION=AUTO)

Earlier, the lock escalation level for these tables was set to TABLE, which is the default value and results in more deadlocks.

You can generate a script that sets LOCK_ESCALATION=AUTO for every table in a schema with the SQL below; running the generated ALTER statements then applies the change to all tables in that schema:
select 'ALTER TABLE [' + + '].[' + + '] SET (LOCK_ESCALATION=AUTO)'
from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id
where = '<schema_name>' and t.lock_escalation_desc = 'TABLE'
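
To confirm the change took effect, a query along these lines (a minimal sketch against the same sys.tables/sys.schemas catalog views; replace <schema_name> with your own schema) lists each table's current escalation setting:

select, t.lock_escalation_desc
from sys.tables t
inner join sys.schemas s on t.schema_id = s.schema_id
where = '<schema_name>'
order by;

After the generated ALTER statements have run, lock_escalation_desc should read AUTO instead of TABLE for every table in the schema.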


lucazav wrote Jul 27, 2015 at 4:08 PM

Thank you for sharing, rightesh!