Article Stuart Salzer · Nov 8, 2016 38m read

What is a core file, and when are they useful?

The information in this document is current as of versions of InterSystems products released through 2024-06-06. This update covers errors discovered up to 2024-08-12, but not changes present in newer versions of InterSystems products.

Nevertheless, the details for existing products are not subject to frequent change.

The WRC can supply you with a .PDF version of this article suitable for printing on either US 8½″ × 11″ or A4 210 mm × 297 mm paper.

Table of Contents

Core file basics
AIX
Docker
HP–UX
RedHat Linux
SuSE Linux
Ubuntu Linux
macOS (Darwin)
OpenVMS
Solaris
Windows
Testing
Sanity Test
Transmission
Index

Core file basics

Caché, Ensemble, HealthShare, and InterSystems IRIS data platform are very reliable. The vast majority of our customers never experience any kind of failure. However, under rare conditions, processes have failed, and in doing so have produced a core file (called a process dump file on Windows and OpenVMS). The core file contains a detailed copy of the state of the process at the instant of its failure, including the process’s registers and memory (including or excluding shared memory, depending upon configuration details).

The core file is, in essence, an instantaneous picture of a failing process at the moment it attempts to do something very wrong. From this picture, we can extrapolate backward in time to find the initial mistake that led to the failure. As we look back in time, our picture of the process becomes fuzzier. With more detailed cores, we can look farther back in time before the picture becomes too fuzzy.

With properly collected core files and associated information, we can often solve the problem outright, and otherwise extract valuable information about the failing process. With an artificially induced core file, usually all we can say (often after hours of analysis) is “I see what happened to this process; someone artificially forced a core of the process.” An artificially induced core of a misbehaving but extant process can be useful as a secondary source of information, filling in the details of an analysis gathered from information not available in the core.

InterSystems products can be configured to record full cores on any process failure. This has no impact on the performance of your day-to-day operation. All you need to do is keep a significant amount of disk space free for any potential, albeit unlikely, failure. InterSystems has a good record of solving problems when a full core is available. Sometimes we discover it was an obscure hardware failure that is never going to occur again.

InterSystems products can also be configured to record little or no information for process failures. While there is no performance advantage to disabling cores, you might find an operational advantage. Core files can contain sensitive information. If you don’t want to maintain a policy for securing core files, you can enable core files only after repeated failures.

Out of the box, InterSystems products install with an intermediate approach: limited-size cores. With these small cores, InterSystems can normally identify a previously solved problem, and perhaps solve simple problems. We can’t solve all problems with the default limited cores.

The primary control for determining the size and type of core you will get is DumpStyle. This is a parameter in your cache.cpf or iris.cpf file. There are also several other operating-system-specific controls.

DumpStyle is explained here: http://docs.intersystems.com/iris20241/csp/docbook/DocBook.UI.Page.cls?KEY=RCPF_Dumpstyle. DumpStyle takes an integer value from 0 to 8 that applies to every process in a Caché, Ensemble, HealthShare, or InterSystems IRIS data platform instance, and defines what kind of core (or process dump) file is saved should a process encounter a serious error. The defined values are:

Code  Name           Platform  Results
0     NORMAL         Unix      Produces full core (depending upon other settings).
                     OpenVMS   Produces CACCVIO-pid.LOG (of limited value).
                     Windows   Produces pid.dmp (of limited value).
1     FULL           Unix      Produces full core (depending upon other settings).
                     OpenVMS   Produces CACHE.DMP (possibly very large).
                     Windows   Produces productfpid.dmp (possibly very large).
2     DEBUG          Unix      Produces core with shared memory omitted. Prior versions of this document claimed this option was deprecated. It is not deprecated; however, the OS-specific methods to omit shared memory are more flexible.
                     OpenVMS   Unimplemented.
                     Windows   Reserved to InterSystems.
3     INTERMEDIATE   Unix      Unimplemented.
                     OpenVMS   Unimplemented.
                     Windows   Effective 2014.1, produces productipid.dmp.
4     MINIMAL        Unix      Unimplemented.
                     OpenVMS   Unimplemented.
                     Windows   Effective 2014.1, produces productmpid.dmp.
5     NOHANDLER      Unix      Do not register a signal handler. Leave all decisions about core creation up to the operating system.
                     OpenVMS   Unimplemented.
                     Windows   Unimplemented.
6     NOCORE         Unix      Do not generate a core file.
                     OpenVMS   Unimplemented.
                     Windows   Unimplemented.
7     NOFORK         Unix      Create a core dump (with shared memory), but do so from the original failing process, not a forked copy of the failing process.
                     OpenVMS   Unimplemented.
                     Windows   Unimplemented.
8     NOFORKNOSHARE  Unix      Create a core dump without shared memory, but do so from the original failing process, not a forked copy of the failing process.
                     OpenVMS   Unimplemented.
                     Windows   Unimplemented.
The default DumpStyle is 0 = NORMAL, except on Windows since Caché 2014.1, where it is 3 = INTERMEDIATE.
product is either iris or cache; pid is the process id of the failing process in decimal (except on OpenVMS, where hexadecimal pids are used).

So, for this control, set it as follows:

          Limited cores   Intermediate cores   Full cores
Unix      0               0                    1 (other controls apply)
OpenVMS   0               0                    1
Windows   4               3                    1

There are three ways to change the value of DumpStyle. They are:

① Place this section in your cache.cpf or iris.cpf file (you will need to use your operating system’s text editor for this):
[Debug]
dumpstyle=1
The number after the equals sign is the new default DumpStyle. Restart Caché, Ensemble, HealthShare, or InterSystems IRIS data platform. This is effective for all processes, and defines a new default for all processes unless you override it with method ② or ③ below.
② Issue the command:
SET old=$SYSTEM.Config.ModifyDumpStyle(1)
The number in parentheses is the new value for DumpStyle. The old value is returned. This command is effective for all new processes created after it is run. Existing processes continue to run with their prior DumpStyle. This command became effective with Caché 2014.1. For older versions, you can use this command:
VIEW $ZUTIL(40,2,165):-2:4:1
where the new value for DumpStyle is the final digit.
③ Issue this command, or place it in your application:
VIEW $ZUTIL(40,1,48):-1:4:1
where the new value for DumpStyle is the final digit. This is effective only for the process issuing the command, and overrides methods ① and ②.

An often-asked question is: how large will my cores be? The answer is the amount of [dirty] memory used by the process at the time of failure, plus a little more to describe that memory’s layout. Unfortunately, there is no simple formula to compute that size accurately. The best estimate depends upon whether or not you will be including shared memory.

Start with this:

size = base + heap + extra + gmheap + routine + d × global

where the last three terms (gmheap, routine, and d × global) apply only if shared memory is included, and:

base: the base amount of memory needed. Start with the size of the cache[.exe] or irisdb[.exe] image.
heap: the memory used by local variables your application creates. Estimate this by taking the difference of the system variable $STORAGE when your application starts and deep inside the most memory-intense loop.
extra: covers features that require extra memory. There is no definitive list, but $SORTBEGIN() and MERGE are well known to use extra memory.
gmheap: from the [config] gmheap= section of your cache.cpf or iris.cpf file. This value appears in the configuration file in kio, so multiply by 1024. Skip this if you intend to exclude shared memory.
routine: the sum of all the values from the [config] routines= section of your cache.cpf or iris.cpf file. This value appears in the configuration file in Mio, so multiply by 1048576. Skip this if you intend to exclude shared memory.
d: accounts for the need to describe the memory used by global. This value will be somewhat greater than one. The actual value will vary among versions, platforms, and the global buffer size you choose. For all 8 kio buffers on the InterSystems IRIS data platform on AIX, the value is about 1.05.
global: the sum of all the values from the [config] globals= section of your cache.cpf or iris.cpf file. This value appears in the configuration file in Mio, so multiply by 1048576. Skip this if you intend to exclude shared memory.

Note: As a practical matter, on most large production deployments, global is large enough that it dwarfs all other factors. To save core files with shared memory in a typical large production deployment, size = 1.25 × global is a reasonable estimate.
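As a sketch of the arithmetic, the following shell fragment estimates a full core size from configuration values. Every number here is a made-up example for illustration, not a measurement or a recommended setting:

```shell
# Hypothetical sizing sketch (all values are assumptions, not measurements).
# Units are converted to MiB; d is scaled by 100 to stay in integer math.
base_mib=300          # size of the irisdb[.exe] image (assumed)
heap_mib=256          # local-variable use, estimated from $STORAGE deltas (assumed)
extra_mib=64          # $SORTBEGIN()/MERGE overhead (assumed)
gmheap_kib=293888     # [config] gmheap= value, in kio (assumed)
routines_mib=80       # [config] routines= value, in Mio (assumed)
globals_mib=4096      # [config] globals= value, in Mio (assumed)
d_x100=105            # descriptor factor d of about 1.05, scaled by 100

size_mib=$(( base_mib + heap_mib + extra_mib + gmheap_kib / 1024 \
             + routines_mib + d_x100 * globals_mib / 100 ))
echo "estimated full-core size: ${size_mib} MiB"
```

With these example inputs the global buffer term dominates, which is why the 1.25 × global rule of thumb above works for large deployments.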

If you are concerned about the amount of disk space needed to store a core file, consider:

  • If you will be running on a cloud service, and you don’t want to pay to keep a large amount of disk space reserved, you may have to accept limited core files. That is, core files without shared memory.
  • If you will be running on a server that you control, but don’t want to reserve a large amount of disk space on your expensive disk array, you can purchase a cheap USB disk. You don’t need a fast or redundant disk for core files. You may want to attach a hasp staple onto the cheap USB disk with epoxy and padlock it to something substantial.

Most operating systems have controls to redirect cores to a common directory and to control the amount of information in cores. These also should be set, and you should consider the ramifications of doing so, especially from a data privacy perspective. The following sections cover the details for individual operating systems.

Moving cores to a common directory is very useful for capacity planning, but may also make the cores more accessible to anyone wishing to exfiltrate data from your site.

Many types of problems simply cannot be solved without including shared memory in the core. Cores that include shared memory tend to be much larger than cores that do not. Most of the difference is the size of your global and routine buffers.

If you are processing sensitive information, a core file without shared memory will only contain the sensitive information being processed by the one process that failed. A core file with shared memory will also contain all the global variables recently accessed by every process. “Recently” might mean minutes, or considerably longer.

AIX

Full (and modern-style) cores should be enabled with smit:

System Environments
> Change / Show Characteristics of Operating System
> > Enable full CORE dump                               true
> > Use pre-430 style CORE dump                         false

This can also be seen from the command line with:

# lsattr -E -l sys0 | egrep 'fullcore|pre430core'↩
fullcore     true             Enable full CORE dump                 True
pre430core   false            Use pre-430 style CORE dump           True

And set with:

# chdev -l sys0 -a fullcore=true -a pre430core=false -P↩

The -P makes the change permanent.

By default, core files are written to the default directory of the process at the time of process failure. Typically that is the same directory as one of your main CACHE.DAT or IRIS.DAT files. This can be changed with smit:

Problem Determination
> Change/Show/Reset Core File Copying Directory

or from the command line with:

# chcore -p on -l /cores -n on -d↩

Ensure the file /etc/security/limits has a section with the lines:

default:
core = -1

Finally, ensure that, by whatever means you set up environment variables for user processes, each user has CORE_NOSHM defined or not defined as desired. If CORE_NOSHM=1 is defined, core files exclude shared memory. If CORE_NOSHM=0, or it is not defined at all, core files include shared memory. The easy way to do this for all users is to edit /etc/environment to include the line:

CORE_NOSHM=1

To control on an individual basis which users have shared-memory cores suppressed, edit one of these files based upon the user and the shell they use:

CORE_NOSHM=1;export CORE_NOSHM    # sh in /etc/profile or $HOME/.profile
export CORE_NOSHM=1               # ksh in /etc/.kshrc or $HOME/.kshrc
export CORE_NOSHM=1               # bash in /etc/bashrc or ~/.bashrc
setenv CORE_NOSHM 1               # csh in ~/.cshrc

Docker

Core file creation for an InterSystems IRIS data platform Docker container is controlled by the host Linux system (with a few caveats). You must plan to send core files directly to an operating system file. That file can be inside the Docker container, or in a directory mapped onto the host Linux system. The advantage of sending the core file to a directory mapped onto the host Linux system is that it will survive a complete failure of the container.

Since the core file must go to an operating system file, you must disable any advanced core-capturing software on the host platform. You will want to set /proc/sys/kernel/core_pattern to an appropriate value for both the host and container systems. You should choose a relatively simple directory that you know will exist on both the host and the container (/tmp or /cores are the obvious best choices). You may also want to include variables to ensure that cores from multiple Docker containers don’t overwrite each other. Thus /cores/core.%p.%e is a good choice.
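A minimal sketch of setting that pattern, run as root on the Docker host (containers share the host kernel’s core_pattern, so this is a host-side change):

```shell
# On the host, as root. The directory must exist before any core is written.
mkdir -p /cores
echo '/cores/core.%p.%e' > /proc/sys/kernel/core_pattern
cat /proc/sys/kernel/core_pattern     # confirm the new pattern
```

This setting does not survive a reboot; the per-distribution sections below show how to persist it.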

Host OS          How to disable advanced core capturing
RedHat Linux     You must disable the Automatic Bug Reporting Tool (ABRT).
SuSE Linux       Up through SuSE Linux Enterprise Server 11, SuSE did not have any advanced core-capturing software, so all you need to do is set /proc/sys/kernel/core_pattern per the instructions. However, as yet we do not provide instructions for disabling the advanced core-capturing software in SuSE Linux Enterprise Server 12, so SuSE 12 and later versions are currently unsuitable hosts for Docker containers.
Ubuntu Linux     You must disable apport.
macOS (Darwin)   As yet we do not provide instructions for capturing cores with a macOS host, so macOS hosts are currently unsuitable hosts for Docker containers.
Windows          As yet we do not provide instructions for capturing cores with a Windows host, so Windows hosts are currently unsuitable hosts for Docker containers.

When you launch the container, you may want to include the option to map the directory you will be using for cores to the host operating system. Thus:

# docker run ⋯ -v /cores:/cores ⋯ ↩

If you don’t include -v /cores:/cores, any core files created by a process failure inside the Docker container will survive only as long as the Docker container is running. If the mapping given by the -v option is not symmetrical (that is, the values to the left and right of the colon differ), you may fail to capture some cores.

Set the core-file size ulimit. Since this is a runtime decision, add the following to the docker run command:

# docker run ⋯ --ulimit core=-1 ⋯ ↩

A full command to start a Docker container might look like:

# docker run --init --detach \↩
        --volume /cores:/cores \↩
        --ulimit core=-1 \↩
        --volume /home/salzer/volume_host:/volume_container \↩
        --publish 52773:52773 \↩
        --publish 52443:52443 \↩
        --publish 51773:51773 \↩
        --publish 1972:1972 \↩
        --env ISC_DATA_DIRECTORY=/volume_container/config_2024 \↩
        --env ICM_SENTINEL_DIR=/volume_container \↩
        --name iris2024 \↩
        intersystems/iris:2024.1.0.267.2 \↩
        --key /volume_container/iris_container2024.key↩

HP–UX

Enable placing cores in a common directory, with extended naming, with:

# coreadm -e global -g /cores/core.%p.%f↩

%p places the pid in the pathname; %f places the name of the executable (such as cache or iris) in the pathname. See:

% man 1m coreadm↩

for more options.

Review whether shared memory has been enabled in core files with:

# /usr/sbin/kctune core_addshmem_read↩
# /usr/sbin/kctune core_addshmem_write↩

Change with:

# /usr/sbin/kctune core_addshmem_read=1↩
# /usr/sbin/kctune core_addshmem_write=1↩

1 means enable; 0 means disable. HP–UX divides shared memory into two types. In general, InterSystems only uses write shared memory, but we recommend setting both types the same.

On HP–UX the core size is limited by the maxdsiz_64bit kernel parameter. Make sure that it is set high enough that a full core can be generated.

Review with:

# /usr/sbin/kctune maxdsiz_64bit↩

Set with:

# /usr/sbin/kctune maxdsiz_64bit=4294967296↩

A user can further limit their cores with a ulimit -c command. This command should be removed from /etc/profile, $HOME/.profile, and similar files for other shells, unless it is your intention to limit core files.

RedHat Linux

If you are running RHEL 6.0 or later (also CentOS), RedHat has added their Automatic Bug Reporting Tool (ABRT). As installed, this is not compatible with Caché, Ensemble, HealthShare, or InterSystems IRIS data platform. You need to decide whether you wish to configure ABRT to support Caché, Ensemble, HealthShare, or InterSystems IRIS data platform, or to disable ABRT.

Below, sections labeled ABRT apply to use of ABRT,
while sections labeled AB/RT apply to traditional use without ABRT.

ABRT To make InterSystems products compatible with ABRT, determine the version of ABRT you are running:

# abrt-cli --version↩

Edit the ABRT configuration file. The name varies depending upon the version of ABRT:

ABRT 1.x: /etc/abrt/abrt.conf
ABRT 2.x: /etc/abrt/abrt-action-save-package-data.conf

If you installed Caché, Ensemble, or HealthShare with a cinstall command (most common), or InterSystems IRIS data platform with an irisinstall command, find the ProcessUnpackaged= line, and change the value to yes.

ProcessUnpackaged = yes

Otherwise, if you installed Caché, Ensemble, HealthShare, or InterSystems IRIS data platform from an RPM module, find the OpenGPGCheck= line, and change the value to no.

OpenGPGCheck = no

Regardless of how you installed Caché, Ensemble, HealthShare, or InterSystems IRIS data platform, find the BlackListedPaths= line, and add a reference to cstat or irisstat in the installation’s bin directory. If the BlackListedPaths= line does not exist, add it at the end with just the cstat or irisstat reference.

BlackListedPaths=[retain_existing_list,]installation_directory/bin/cstat

Save your edits, and restart abrtd:

# service abrtd restart↩

Configured as such, ABRT creates a new directory (under /var/spool/abrt or /var/tmp/abrt) for each process failure, and in that directory places the core and associated information.

When a process failure occurs, issue the command:

# abrt-cli --list↩        # for ABRT 1.x
# abrt-cli list↩          # for ABRT 2.x

This will show a list of recent process failures, and for each will give a directory specification. In each directory will be a coredump file, along with many other small files that collectively can be quite useful in determining the cause of the process failure.

From some other directory, enter the command:

% tar -cvzf wrcnumber-core.tar.gz /var/spool/abrt/directory/*↩

where wrcnumber is the number InterSystems assigns to the investigation of your case. You can send us the compressed wrcnumber-core.tar.gz file.

AB/RT Alternatively, you can disable ABRT with:

# service abrtd stop↩
# service abrt-ccpp stop↩       # ABRT 2.x only.

To permanently disable ABRT:

# chkconfig abrtd off↩
# chkconfig abrt-ccpp off↩     # ABRT 2.x only.

Finally, you need to update /proc/sys/kernel/core_pattern; see the next section.

AB/RT You can control where cores are deposited (unless you are using ABRT).

① If you are using ABRT, you must skip this step.
② If you have disabled ABRT, you must perform this step.
③ If you never had ABRT, this step is optional.

Edit the file /proc/sys/kernel/core_pattern.

In the simple case, just use:

core

It is generally useful to add the pid and the name of the program generating the core with:

core.%p.%e

You might also place the cores in a common directory with:

/cores/core.%p.%e

Verify that all users have write access to the directory chosen. See man core for more options. You should make this change permanent by creating a file in the directory /etc/sysctl.d with a name ending in .conf, and containing:

kernel.core_pattern=/cores/core.%p.%e

ABRT  AB/RT You should set /proc/self/coredump_filter to control the amount of memory dumped to the core. This can be done in an appropriate /etc/profile.d/something.sh file. The command is:

# echo 0x33 >/proc/self/coredump_filter↩

The exact bitmap used depends upon the level of data you wish to collect. The meanings of the bits can be found in man core; samples that make sense for InterSystems products are:

Bit     Description                        Need for InterSystems
0x01    Anonymous private mappings.        Always needed.
0x02    Anonymous shared mappings.         Needed for complex problems.
0x04    File-backed private mappings.      Maybe needed for problems with $ZF().
0x08    File-backed shared mappings.       Maybe needed for problems with $ZF().
0x10    Dump ELF headers.                  Always needed.
0x20    Dump private huge pages.           Not currently used by InterSystems.
0x40    Dump shared huge pages.            Not currently used by InterSystems.
0x80    Dump private DAX pages (RHEL 8).   Not currently used by InterSystems.
0x100   Dump shared DAX pages (RHEL 8).    Not currently used by InterSystems.
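As a quick sanity check on the 0x33 value used above, it is simply the OR of four of these bits; this one-liner reproduces it:

```shell
# 0x33 is the OR of: anonymous private (0x01), anonymous shared (0x02),
# dump ELF headers (0x10), and dump private huge pages (0x20).
printf '0x%x\n' $(( 0x01 | 0x02 | 0x10 | 0x20 ))
```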

As an alternative to placing this in a shell-specific script, you can modify this during boot. These instructions only apply if you boot with grub2. You can test this with:

# grub2-install --version↩
grub2-install (GRUB) 2.02~beta2

Edit /etc/default/grub. Change the line that begins GRUB_CMDLINE_LINUX_DEFAULT=. If the line doesn’t already exist in the file, just add it at the end. It should contain:

GRUB_CMDLINE_LINUX_DEFAULT="oldcmd coredump_filter=newval"

Note: oldcmd is the old value of GRUB_CMDLINE_LINUX_DEFAULT (omit it if the line didn’t previously exist). newval is the new value for coredump_filter in hexadecimal with a leading “0x”.

Run:

# grub2-mkconfig -o /boot/grub2/grub.cfg↩

ABRT  AB/RT You should set your ulimit -c for all processes to unlimited. This can be set globally in the file /etc/security/limits.conf. Add these two lines:

*       soft    core        unlimited
*       hard    core        unlimited
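With the pattern, filter, and limits in place, one way to check the whole chain is to crash a disposable process deliberately. This sketch assumes the /cores/core.%p.%e pattern from above and a shell whose core ulimit can be raised to unlimited:

```shell
# Crash a throwaway process with SIGSEGV and look for its core file.
ulimit -c unlimited || true   # may require the limits.conf change above
sleep 60 &
pid=$!
kill -SEGV "$pid"
wait "$pid"
status=$?                     # 128 + 11 = 139 for a process killed by SIGSEGV
echo "exit status: $status"
ls -l /cores/core."$pid".* 2>/dev/null || true
```

If no core appears, re-check core_pattern, the directory permissions, and (on RedHat) that ABRT is not intercepting the dump.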

SuSE Linux

If you are running SuSE Linux Enterprise Server 12 or later, SuSE now stores all cores in the systemd journal. Core files stored in the systemd journal are transitory: they do not survive a system reboot. Cores, if needed, must be extracted from the systemd journal before any system reboot.

To list the core files currently in the systemd journal:

# [systemd-]coredumpctl list↩

To extract a core selected by the pid that created the core:

# [systemd-]coredumpctl -o core.morename dump pid↩

Note: the systemd- prefix was removed from the command name effective with SuSE 12-SP2. It is recommended that you leave this systemd behaviour in place, and not attempt to defeat it.

If you are running an older version of SuSE Linux Enterprise (11 or earlier), you can control where cores are deposited by editing the file /proc/sys/kernel/core_pattern.

In the simple case, just use:

core

It is generally useful to add the pid and the name of the program generating the core with:

core.%p.%e

You might also place the cores in a common directory with:

/cores/core.%p.%e

Verify that all users have write access to the directory chosen. See man core for more options.

You can make this change permanent by appending these lines to the file /etc/sysctl.conf:

# Make this core pattern permanent (SuSE 12 breaks this, don’t use):
kernel.core_pattern=/cores/core.%p.%e

You should set /proc/self/coredump_filter to control the amount of memory dumped to the core. This can be done in an appropriate /etc/profile.d/something.sh file. The command is:

# echo 0x33 >/proc/self/coredump_filter↩

The exact bitmap used depends upon the level of data you wish to collect. The meanings of the bits can be found in man core; samples that make sense for InterSystems products are:

Bit     Description                        Need for InterSystems
0x01    Anonymous private mappings.        Always needed.
0x02    Anonymous shared mappings.         Needed for complex problems.
0x04    File-backed private mappings.      Maybe needed for problems with $ZF().
0x08    File-backed shared mappings.       Maybe needed for problems with $ZF().
0x10    Dump ELF headers.                  Always needed.
0x20    Dump private huge pages.           Not currently used by InterSystems.
0x40    Dump shared huge pages.            Not currently used by InterSystems.
0x80    Dump private DAX pages (SuSE 15).  Not currently used by InterSystems.
0x100   Dump shared DAX pages (SuSE 15).   Not currently used by InterSystems.

As an alternative to placing this in a shell-specific script, you can modify this during boot. To do this, use yast2. The user interface for yast2 will vary depending upon whether you are connected with a terminal interface (it will use a curses interface) or a GUI interface. These instructions try to be interface-agnostic.

After launching yast2, select System ⟶ Boot Loader from the menu.
Select the Kernel Parameters tab.
Look for the Optional Kernel Command Line Parameter field.
If the field does not already contain coredump_filter=0xvalue, append it to the field with a space separator. If it already contains the assignment, simply edit value.
Exit the menu system, and reboot.

You should set your ulimit -c for all processes to unlimited. This can be set globally in the file /etc/security/limits.conf. Add these two lines:

*       soft    core        unlimited
*       hard    core        unlimited

Note: It may be necessary to disable AppArmor, which blocks application behaviour that it considers unusual; writing a core file may be considered unusual.

# rcapparmor stop↩

Ubuntu Linux

Ubuntu uses apport to trap all process failures and, for packages added with its installation package, create apport reports, which contain encoded and compressed cores with additional information. It is possible to ask apport to process unpackaged code, that is, applications not installed with Ubuntu’s package manager. Unfortunately, in doing so, Canonical treats the apport reports created for unpackaged code as something it can examine for the improvement of Ubuntu.

Since it is possible to extract your data from an apport report, you almost certainly do not want to enable apport processing of unpackaged code. Your only choice is to disable apport.

Use these commands:

# systemctl stop apport.service↩
# systemctl disable apport.service↩
# systemctl mask apport.service↩

Create a file /etc/sysctl.d/30-core-pattern.conf (or any similar name in that directory). In that file, place:

kernel.core_pattern=/cores/core.%p.%e

Ensure that the directory you specify for saving cores is publicly writable, and has sufficient disk space. See man core for more options.

You should set /proc/self/coredump_filter to control the amount of memory dumped to the core. This can be done in an appropriate /etc/profile.d/something.sh file. The command is:

# echo 0x33 >/proc/self/coredump_filter↩

The exact bitmap used depends upon the level of data you wish to collect. The meanings of the bits can be found in man core; samples that make sense for Caché are:

Bit     Description                          Need for InterSystems
0x01    Anonymous private mappings.          Always needed.
0x02    Anonymous shared mappings.           Needed for complex problems.
0x04    File-backed private mappings.        Maybe needed for problems with $ZF().
0x08    File-backed shared mappings.         Maybe needed for problems with $ZF().
0x10    Dump ELF headers.                    Always needed.
0x20    Dump private huge pages.             Not currently used by InterSystems.
0x40    Dump shared huge pages.              Not currently used by InterSystems.
0x80    Dump private DAX pages (16.04 LTS).  Not currently used by InterSystems.
0x100   Dump shared DAX pages (16.04 LTS).   Not currently used by InterSystems.

As an alternative to placing this in a shell-specific script, you can modify this during boot. These instructions only apply if you boot with grub2. You can test this with:

# grub-install --version↩
grub-install (GRUB) 2.02-2ubuntu8.12

Edit /etc/default/grub. Change the line that begins GRUB_CMDLINE_LINUX_DEFAULT=. If the line doesn’t already exist in the file, just add it at the end. It should contain:

GRUB_CMDLINE_LINUX_DEFAULT="oldcmd coredump_filter=newval"

Note: oldcmd is the old value of GRUB_CMDLINE_LINUX_DEFAULT (omit it if the line didn’t previously exist). newval is the new value for coredump_filter in hexadecimal with a leading “0x”.

Run:

# grub-mkconfig -o /boot/grub2/grub.cfg↩

You should set your ulimit -c for all processes to unlimited. This can be set globally in the file /etc/security/limits.conf. Add these two lines:

*                soft    core            unlimited
*                hard    core            unlimited

macOS (OS X, Darwin)

Mac OS X was renamed OS X, and later renamed macOS. All these operating systems are Apple’s proprietary user interface layered upon Darwin, an operating system that Apple derived from BSD Unix and, theoretically, released to the public domain. Nevertheless, Apple releases Darwin in such a way that, as a practical matter, no one will ever run just Darwin.

InterSystems products only require Darwin, but since Darwin isn’t practically available, all instructions are based upon the full Apple Mac OS X, OS X, or macOS.

macOS includes CrashReporter, a tool that automatically intercepts process failures, packages the failure details as text logs, and sends the data to Apple for analysis. CrashReporter will capture process failure details for third-party software, such as Caché, Ensemble, HealthShare, and InterSystems IRIS data platform, which, in theory, Apple might forward to InterSystems.

InterSystems does not receive CrashReporter logs from Apple, nor have we developed the ability to analyze them. InterSystems works strictly from core files. Fortunately, CrashReporter works independently from core file creation. That is, it is possible to process a process failure through neither, either, or both of CrashReporter and core file creation.

CrashReporter preferences can be set in System Preferences → Security & Privacy, Privacy tab. The panel name and selection of boxes varies from version to version. In Mac OS X 10.4, the panel was called just Security, and there were no relevant check boxes. In those older versions, the user was always presented with a dialog box on any process failure, and asked whether they wanted to send the data to Apple for analysis.

Depending upon the sensitivity of the data your processes handle, you may want to untick all the options related to CrashReporter.

The method for enabling cores in macOS has undergone significant changes from version to version. See the following chart, and use the appropriate method for your version.

Release | Code Name | InterSystems versions | Method
Public Beta | Kodiak | unsupported | Method 1: Edit /hostconfig
Mac OS X 10.0 | Cheetah | unsupported | Method 1
Mac OS X 10.1 | Puma | unsupported | Method 1
Mac OS X 10.2 | Jaguar | unsupported | Method 1
Mac OS X 10.3 | Panther | Caché (PowerPC) 5.0, 5.1 | Method 1
Mac OS X 10.4 | Tiger | Caché (PowerPC or x86 as marked) 5.0 PowerPC, 5.1 PowerPC, 5.2*, 2007.1*, 2008.1 x86, 2008.2 x86, 2009.1 x86 | Method 2: Edit /etc/launchd.conf
Mac OS X 10.5 | Leopard | Caché (x86) 2008.1, 2008.2, 2009.1, 2010.1 | Method 2
Mac OS X 10.6 | Snow Leopard | Caché (x86-64) 2010.1, 2010.2, 2011.1, 2012.1, 2012.2 | Method 2
Mac OS X 10.7 | Lion | Caché (x86-64) 2011.1, 2012.1, 2012.2, 2013.1, 2014.1 | Method 2
OS X 10.8 | Mountain Lion | Caché (x86-64) 2012.2, 2013.1, 2014.1, 2015.1 | Method 2
OS X 10.9 | Mavericks | Caché (x86-64) 2013.1, 2014.1, 2015.1, 2015.2, 2016.1, 2016.2 | Method 2
OS X 10.10 | Yosemite | Caché (x86-64) 2014.1, 2015.1, 2015.2, 2016.1, 2016.2 | Method 3: Not automatic
OS X 10.11 | El Capitan | Caché (x86-64) 2016.1, 2016.2, 2017.1 DEV, 2017.2 DEV, 2018.1 DEV | Method 3
macOS 10.12 | Sierra | Caché (x86-64) 2017.1, 2017.2, 2018.1 | Method 3
macOS 10.13 | High Sierra | Caché 2018.1; IRIS 2018.1, 2019.1, 2019.2, 2019.3, 2019.4, 2020.1, 2020.2, 2020.3, 2021.1 | Method 3
macOS 10.14 | Mojave | IRIS 2019.1, 2019.2, 2019.3, 2019.4, 2020.1, 2020.2, 2020.3, 2021.1 | Method 3
macOS 10.15 | Catalina | unsupported | Method 3
macOS 11 | Big Sur | IRIS 2022.1, 2023.1 x86-64 | Method 3
macOS 12 | Monterey | IRIS 2022.1 ARM, 2023.1 | Method 3
macOS 13 | Ventura | IRIS 2023.1, 2024.1 | Method 3
macOS 14 | Sonoma | IRIS 2024.1 | Method 3
macOS 15 | Sequoia | unreleased | Method 3

Method 1: For versions through Mac OS X 10.3 (Panther), and prior unsupported versions: edit the file /hostconfig. Find the line COREDUMPS=, and change the value to -YES-.

Method 2: For versions Mac OS X 10.4 (Tiger) through OS X 10.9 (Mavericks), edit the file /etc/launchd.conf, and add the line:

limit core unlimited

Then reboot.

Method 3: For versions OS X 10.10 (Yosemite) and newer, /etc/launchd.conf is eliminated, and core file generation is disabled in two places. Users must enable cores for each process with:

% ulimit -c unlimited↩

prior to running their application, and a privileged user must also run:

# launchctl limit core unlimited↩

Then log out and log in again prior to starting Caché. Apple specifically does not provide a good way to automate this, as they consider the default generation of core files to be a potential security vulnerability.
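Before starting Caché, you can confirm that both halves of Method 3 are in effect. This is a sketch in the same transcript style as above; the outputs shown are what you should expect once both settings are applied:

```
% ulimit -c↩
unlimited
% launchctl limit core↩
	core    unlimited      unlimited
```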

Apple does provide a way to totally disable core file generation. This is done by editing the file /etc/sysctl.conf and adding the line:

kern.coredump=0

It can be re-enabled by removing the line, or by changing the value to 1.
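You can query the current setting at any time. A sketch, again in transcript style; a value of 1 means core file generation is enabled:

```
# sysctl kern.coredump↩
kern.coredump: 1
```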

OpenVMS

By default, Caché and Ensemble on OpenVMS will only produce CACCVIO-pid.LOG files for failing processes. With these, only relatively simple problems can be solved. These CACCVIO-pid.LOG files will always be placed in the process’s default directory (typically the directory of a CACHE.DAT file), and can only be redirected by changing the process’s default directory.

Caché and Ensemble may also produce CERRSAVE-pid.LOG files. These are similar to CACCVIO-pid.LOG files, and usually you do not need to concern yourself with the difference. In some cases Caché and Ensemble will produce both files in response to a failure. In all cases seen so far, the CACCVIO-pid.LOG file is produced first with the full context of the error, while the CERRSAVE-pid.LOG file is produced during final rundown of the process and contains comparatively little information of value.

If extended process dumps (FULL dumps) are enabled, they too will be placed in the process’s default directory. However, they can be redirected by defining the logical name SYS$PROCDMP to point to a directory in which to store the process dump. This logical name can be defined at the /SYSTEM level. The file name will be CACHE.DMP or CSESSION.DMP.

OpenVMS also provides the logical name SYS$PROTECTED_PROCDMP. You should also define that logical name with both /EXECUTIVE_MODE and /SYSTEM. This applies to process failures of privileged images, and parts of Caché are privileged. The OpenVMS documentation will advise you to define the two logical names to different directories, and place higher security on the directory corresponding to SYS$PROTECTED_PROCDMP. This is based upon the assumption that the data processed by privileged images is more sensitive than that processed by non-privileged images. If both are sensitive, it is fine to point both logical names to the same directory.
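As a sketch, the two logical names might be defined at the /SYSTEM level like this; the device and directory names are placeholders, not recommendations:

```
$ DEFINE/SYSTEM SYS$PROCDMP DISK$DUMPS:[PROCDUMPS]↩
$ DEFINE/SYSTEM/EXECUTIVE_MODE SYS$PROTECTED_PROCDMP DISK$DUMPS:[PROCDUMPS]↩
```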

There is a history of defects affecting the creation of CACCVIO-pid.LOG and CERRSAVE-pid.LOG files, as well as full process dumps. These are the most important changes:

Change | First version | Description
JLC1809 | Caché 2015.2 | Prior to this change, most CERRSAVE-pid.LOG files were useless.
JO2422 | Caché 2012.1 | Prior to this change, conditions that would generate a CERRSAVE-pid.LOG file always created the limited-information file, ignoring DumpStyle.
JLC1326 | Caché 2011.1 | Prior to this change, registers were not included in CACCVIO-pid.LOG and CERRSAVE-pid.LOG files on the Itanium platform. This seriously hampered our ability to solve all but simple problems with these files. We could still match with already-solved problems.
JLC931 and JLC959 | Caché 2007.2 | Prior to these changes, no useful information was recorded in CACCVIO-pid.LOG and CERRSAVE-pid.LOG files on the Itanium platform.
JO1968 | Caché 5.2 | Prior to this change, conditions that would generate a CACCVIO-pid.LOG file always created the limited-information file, ignoring DumpStyle.

Solaris

On Solaris, you can enable placing cores in a common directory with extended naming with:

# coreadm -e global -g /cores/core.%p.%f -G all↩

%p places the pid in the pathname.
%f places the name of the executable (such as cache) in the pathname.

The -G all includes all types of memory, that is, a full core. Omit this for a default core that still includes most shared memory. The following things can be stored in the core:
Code | InterSystems usage | In default
stack | Needed | yes
heap | Needed | yes
shm | Not used | yes
ism | Not used | yes
dism | Caché shared memory | yes
text | Useful for $ZF() failures | yes
data | Needed | yes
rodata | Not used | yes
anon | Needed | yes
shanon | Generally small | yes
ctf | Needed | yes
symtab | Useful for $ZF() failures | no
shfile | Not used | no

all includes all types of memory; default includes all but the last two. If you want significantly smaller cores (to save space at the expense of making fewer problems solvable), the most space is saved by removing dism shared memory. Do this with:

# coreadm -e global -g /cores/core.%p.%f -G default-dism↩

See:

% man 1m coreadm↩

for more options.

By default, users have

% ulimit -c unlimited↩

You may use the ulimit command (or the limit command in csh) to disable cores, but coreadm is generally more flexible. So you should ensure that ulimit commands don’t appear in /etc/profile, $HOME/.profile, or the corresponding files for other shells.
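One quick way to look for such commands is a simple grep. A sketch that only covers the Bourne-shell startup files named above; add the startup files for any other shells you use:

```shell
# Look for ulimit commands in the common startup files; prints any matching
# lines with file name and line number, or nothing if none are found.
for f in /etc/profile "$HOME/.profile"; do
  [ -f "$f" ] && grep -Hn 'ulimit' "$f"
done
true  # the loop's own exit status is not meaningful
```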

Windows

The information to be included in a dump file on Windows is fully controlled by the DumpStyle parameter in the cache.cpf file (or another interface for changing DumpStyle, described above).
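As a sketch of the relevant entry, DumpStyle appears in the [config] section of cache.cpf (iris.cpf for InterSystems IRIS); the value 1 (a FULL dump) shown here is only one example of a valid setting:

```
[config]
DumpStyle=1
```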

Testing

Local security setup, among other problems, can prevent a core from actually being written. It can be very useful to test whether a core will actually be created under real-world conditions. To do that, enter the command:

USER>DO $ZUTIL(150,"DebugException")↩

To be certain, you should test this statement interactively, inside JOBs (assuming your application uses the JOB command), and even hidden inside an option of your application that your users will not accidentally select. Verify that you get a core file, and follow the sanity check in the next section to verify that it is a good core file.
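For example, a minimal sketch of the JOB test; the routine name ZCORETEST is a placeholder of our choosing, not a routine shipped with the product:

```
ZCORETEST ; force a failure in a background process to test core creation
 DO $ZUTIL(150,"DebugException")
 QUIT
```

Then, from the terminal:

USER>JOB ^ZCORETEST↩

and check the configured core directory for a new core file from the background process.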

Sanity Test

Core files (and process dumps) can be quite large, and they can contain sensitive information. Before transmitting a core file to InterSystems for analysis, it is best to perform a sanity test of the core file on the system that generated it, or a very similar system. Based upon your operating system, please perform the following sanity test:

AIX:

# dbx cache core↩
(dbx) set $stack_details↩
(dbx) where↩
(dbx) quit↩

Send us the output from the above commands when opening a problem with the WRC. If you do not have dbx installed on your system, just open a new problem.

HP–UX:

# gdb cache core↩
(gdb) frame 0↩
(gdb) while 1↩
 > info frame↩
 > up↩
 > end↩
(gdb) quit↩

# adb cache core↩
adb> $c↩
adb> $q↩

Send us the output from one of the two command sets above, depending upon which debugger you have available. If you have both, gdb (actually Wildebeest) is preferred.

RedHat Linux, SuSE Linux, and Ubuntu Linux (use this common sanity test for all flavours of Linux):

# gdb cache core↩
(gdb) frame 0↩
(gdb) while 1↩
 > info frame↩
 > up↩
 > end↩
(gdb) quit↩

Send us the output from the above commands when opening a problem with the WRC. If you do not have gdb installed on your system, just open a new problem.

macOS (Darwin):

# lldb↩
(lldb) target create -c core↩
(lldb) thread backtrace all↩
(lldb) quit↩

# gdb cache core↩
(gdb) frame 0↩
(gdb) while 1↩
 > info frame↩
 > up↩
 > end↩
(gdb) quit↩

Send us the output from lldb (for OS X 10.8 (Mountain Lion) or later); otherwise send the output from gdb (for Mac OS X 10.7 (Lion) or earlier).

OpenVMS:

$ ANALYZE/CRASH dumpfile.DMP↩
SDA> SHOW CALL_FRAME/ALL↩
SDA> QUIT↩

If you are still running OpenVMS v7.x (or earlier), SHOW CALL_FRAME/ALL will not work; instead use:

SDA> SHOW CALL_FRAME↩
SDA> SHOW CALL_FRAME/NEXT↩

Repeat the prior command until you get an error, then:

SDA> QUIT↩

$ ANALYZE/PROCESS dumpfile.DMP↩
DBG> SHOW CALL/IMAGE↩
DBG> QUIT↩

Send us the output from either SDA or the debugger; the output from SDA is preferred. If you only have a CACCVIO-pid.LOG file, check that it is not empty or almost empty.

Solaris:

# mdb cache core↩
> ::stackregs↩
> ::quit↩

# dbx cache core↩
(dbx) where↩
(dbx) quit↩

For almost all applications, InterSystems prefers the dbx debugger on Solaris, but for a sanity test, mdb is better. Send us the stack trace produced by mdb or dbx (mdb preferred) when you open a problem report with the WRC.

Windows: Currently there is no recommended sanity check for Windows process dumps.

Attach the details of the sanity test to your WRC case, or e-mail them to support@intersystems.com.

Transmission

Be prepared to send us the full core along with support files that may be needed for your particular operating system. We need to know the exact version of Caché, Ensemble, HealthShare, or InterSystems IRIS data platform that generated the core file. If you have relinked the software to include custom $ZF() functions, please send the executable. (Actually, it is more convenient if you always send the executable.)

On most Unix systems, it is also best to send the libraries used by the executable. The likelihood that we will need libraries for any given platform varies. Consult this table:

OS | Hardware | Need Libraries | Support Level
AIX | PowerPC | Unlikely | A
HP–UX | PA-RISC | Very Likely | C
HP–UX | Itanium | Likely | B
Linux (all flavours) | x86 | Likely | B
Linux (all flavours) | x86_64 | Likely | A
Linux (all flavours) | Itanium | Very Likely | D
Linux (all flavours) | arm64 | Likely | B
macOS | PowerPC | Unlikely | D
macOS | x86 | Unlikely | C
macOS | x86_64 | Unlikely | A
macOS | arm64 | Unlikely | A
OpenVMS | VAX | n/a | D
OpenVMS | Alpha AXP | n/a | B
OpenVMS | Itanium | n/a | B
Solaris | x86_64 | Very Likely | B
Solaris | Sparc | Unlikely | B
Tru64 UNIX | Alpha AXP | Very Unlikely | D
Windows | x86 | n/a | A
Windows | x86_64 | n/a | A
Windows | Itanium | n/a | D

Explanation of support levels:

A: As of the posting of this document, InterSystems has the resources to diagnose core files on this platform.
B: Full support for this platform has recently lapsed. However, InterSystems still has the resources to diagnose core files on this platform. Some diagnosed problems may not be corrected with an ad hoc build.
C: Legacy support. InterSystems may still have limited resources to diagnose core files on this platform; however, it may no longer be possible to provide an ad hoc build to fix any defects found.
D: Nostalgia support. InterSystems does not maintain any resources to diagnose problems on these platforms, although some limited capability survives. Cores on these platforms might be analysed. There is no chance that any defects found can be fixed.

Issue an ldd command to list the needed libraries:

# ldd install_directory/bin/image↩
        linux-vdso.so.1 => (0x00007fffd1320000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f23e5002000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f23e4dfa000)
        libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f23e4af0000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f23e47ee000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f23e45d8000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f23e4216000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f23e521a000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f23e3ffa000)

The above contains sample output for RHEL 7. The output on all Unix systems is similar. install_directory refers to the directory in which Caché, Ensemble, HealthShare, or InterSystems IRIS data platform is installed. image is cache for all products prior to 2018, and irisdb for products since 2019.
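If you need to gather the libraries themselves, the paths can be pulled out of the ldd output. A sketch, run here against a canned sample of the output above; on a real system you would pipe `ldd install_directory/bin/image` into the awk command instead of the here-document:

```shell
# Print only the resolved library paths: the third field of lines containing
# "=>", keeping just entries that resolved to an absolute path (this skips
# the vdso pseudo-library, which has no file on disk).
awk '/=>/ && $3 ~ /^\// {print $3}' <<'EOF'
        linux-vdso.so.1 => (0x00007fffd1320000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f23e5002000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f23e4216000)
EOF
```

The printed paths can then be passed to cp or tar to collect the files for the WRC.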

If you are sending multiple files, it is best to place them in a compressed container file. In general, .ZIP is best; .tar.gz is also reasonable. For OpenVMS, create a compressed backup save set with:

$ BACKUP *.* [-]saveset.BCK/SAVE/DATA=COMPRESS↩

It can be helpful to include a manifest that explains the files being sent. Please prepare the manifest as a plain text file.
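For example, a sketch of bundling a core, the executable, and a manifest on Unix; every file name here is a placeholder for your actual files:

```shell
# Work in a scratch directory with stand-in files (replace these with your
# real core file and executable).
mkdir -p /tmp/wrc-bundle && cd /tmp/wrc-bundle
: > core.12345
: > irisdb
printf 'core.12345: core from irisdb, pid 12345\nirisdb: the executable that crashed\n' > manifest.txt

# Create the compressed container, then list its contents to double-check.
tar -czf wrc-files.tar.gz core.12345 irisdb manifest.txt
tar -tzf wrc-files.tar.gz
```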

If you will be sending the data electronically, please do not encrypt the file; use an encrypted transmission method instead.

You can send core files to InterSystems by any of these methods:

Direct upload to the WRC Application: You must open a WRC investigation before uploading files. Upon uploading a file, you will have the option of marking the problem for elevated security. Once a problem is marked for elevated security, all access to files associated with your investigation is restricted to staff actually working on the investigation. In addition to a 300 Mio size limit for attachments, there is a 60-second time limit; therefore your maximum upload is reduced if your effective bandwidth is less than about 42 Mbps. (Security: Secure or Elevated. Max size: 300 Mio and 60 seconds.)

E-mail: In general, e-mail should be avoided for all but simple problems that can be investigated without any customer data. Example: you just installed Caché on a new computer, and it fails upon startup with a small core file. That file is reasonable to send via e-mail. (Security: Unsecure. Max size: 40 Mio.)

Our Kiteworks server: You must request a link for uploading data for any given problem. These links expire in 30 days or less. This is the preferred method for uploading secure data. The absolute size limit is the amount of free space on our server. However, as this method is used by most of our customers, please advise us if the files you intend to upload are greater than 4 Gio in size. (Security: Secure. Max size: > 4 Gio.)

Our sftp server: You must request a directory specific to the investigation. A directory will be created for the investigation. For elevated-security problems, we create a restricted-access machine (or virtual machine) and enable an automated process to move any uploaded files to that machine. The absolute size limit is the amount of free space on our server; please advise us if the files you intend to upload are greater than 100 Gio in size. (Security: Elevated. Max size: > 100 Gio.)

Your ftp/sftp server: You must own and fully control any server from which you request we download data. InterSystems will not download data from any third-party server. Third-party servers are considered a security risk. (Security: Up to you. Max size: ?)

SecurLink: InterSystems can download files directly from any approved machine on your network through our SecurLink remote control facility. There is no absolute size limit; however, if you are connected to the Internet via a V.90 modem, it would take us a week to download a 3 Gio core file. (Security: Secure and Elevated. Max size: ?)

Physical media: You can mail physical media to your local InterSystems office. InterSystems can read the media and send the data to our Boston office, where most core analysis is performed. Most offices can deal with USB disks and ISO 9660 optical media. Our Boston office can deal with many tape formats. You should check with InterSystems first before sending any media. If the media is sent via registered (not certified) mail, the data can be considered secure (possibly elevated). (Security: Varies. Max size: ?)

It is important to remember that some of the files we want are binary files, while others are text. For some file transfer methods (especially between unlike operating systems), it is important to specify whether the file is binary or text, to prevent the file from being corrupted.

Index

a

ABRT
AppArmor
abrtd
abrt-cli
apport

b

BlackListedPaths

c

CACCVIO-pid.LOG
cachefpid.dmp
cacheipid.dmp
cachempid.dmp
cache.cpf
CACHE.DMP
CentOS
CERRSAVE-pid.LOG
chcore
chdev
chkconfig
coreadm
coredumpctl
core_addshmem_read
core_addshmem_write
CORE_NOSHM
CrashReporter
CSESSION.DMP
cstat

d

default
DumpStyle

e

/etc/environment
/etc/profile.d/something.sh
/etc/security/limits
/etc/security/limits.conf

f

FULL

g

GRUB_CMD_LINE_DEFAULT
grub2
grub2-mkconfig

i

INTERMEDIATE
irisfpid.dmp
irisipid.dmp
irismpid.dmp
irisstat
iris.cpf

l

lsattr

m

maxdsiz_64bit
MINIMAL

n

NOCORE
NOFORK
NOFORKNOSHARE
NOHANDLER
NORMAL

o

OpenGPGCheck

p

pid.dmp
ProcessUnpackaged
/proc/self/coredump_filter
/proc/sys/kernel/core_pattern

r

rcapparmor
RPM

s

sensitive information
smit
SYS$PROCDMP
SYS$PROTECTED_PROCDMP
systemd
$SYSTEM.Config.ModifyDumpStyle

u

ulimit -c
/usr/sbin/kctune

y

yast2

z

$ZUTIL(40,1,48)
$ZUTIL(40,2,165)
$ZUTIL(150,"DebugException")
