Stuart Salzer · Nov 8, 2016

What is a core file, and when are they useful?


The information in this document is current as of versions of InterSystems products released through 2019-06-30. This update covers errors that have been discovered up to 2020-04-14, but not changes present in newer versions of InterSystems products.

Nevertheless, the details for existing products are not subject to frequent change.

A .PDF version of this article is available from the WRC

Table of Contents

  • Core file basics
  • AIX
  • Docker
  • HP–UX
  • RedHat Linux
  • SuSE Linux
  • Ubuntu Linux
  • macOS (Darwin)
  • OpenVMS
  • Solaris
  • Windows
  • Testing
  • Sanity Test
  • Transmission
  • Index

Core file basics

Caché, Ensemble, HealthShare, and InterSystems IRIS data platform are very reliable. The vast majority of our customers never experience any kind of failure. However, under rare conditions, processes have failed, and in doing so have produced a core file (called a process dump file on Windows and OpenVMS). The core file contains a detailed copy of the state of the process at the instant of its failure, including the process's registers and memory (including or excluding shared memory, depending upon configuration details).

The core file is, in essence, an instantaneous picture of a failing process at the moment it attempts to do something very wrong. From this picture, we can extrapolate backward in time to find the initial mistake that led to the failure. As we look back in time, our picture of the process becomes fuzzier. With more detailed cores, we can look farther back in time before the picture becomes too fuzzy.

With properly collected core files and associated information, we can often solve the problem, or otherwise extract valuable information about the failing process. With an artificially induced core file, usually all we can say (often after hours of analysis) is: "I see what happened to this process; someone artificially forced a core of the process." An artificially induced core of a misbehaving but still extant process can be useful as a secondary source of information, filling in details of an analysis gathered from information not available in the core.

InterSystems products can be configured to record full cores on any process failure. This has no performance impact on your day-to-day operation; all you need is to keep a significant amount of disk space free for any potential, albeit unlikely, failure. InterSystems has a good record of solving problems when a full core is available. Sometimes we discover it was an obscure hardware failure that is never going to occur again.

InterSystems products can also be configured to record little or no information for process failures. While there is no performance advantage to disabling cores, you might find an operational advantage. Core files can contain sensitive information. If you don't want to maintain a policy for securing core files, you can enable core files only after repeated failures.

Out of the box, InterSystems products install with an intermediate approach: limited-size cores. With these small cores, InterSystems can normally identify a previously solved problem, and perhaps solve simple problems. We can't solve all problems with the default limited cores.

The primary control for determining the size and type of core you will get is DumpStyle, a parameter in your cache.cpf or iris.cpf file. There are also several operating-system-specific controls.

DumpStyle is explained here: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RCPF_Dumpstyle. DumpStyle takes an integer value from 0 to 8 that applies to every process in a Caché, Ensemble, HealthShare, or InterSystems IRIS data platform instance, and defines what kind of core (or process dump) file is saved should a process encounter a serious error. The defined values are:

Code  Name           Platform  Results
 0    NORMAL         Unix      Produces full core (depending upon other settings).
                     OpenVMS   Produces CACCVIO-pid.LOG (of limited value).
                     Windows   Produces pid.dmp (of limited value).
 1    FULL           Unix      Produces full core (depending upon other settings).
                     OpenVMS   Produces CACHE.DMP (possibly very large).
                     Windows   Produces cachefpid.dmp (possibly very large).
 2    DEBUG          Unix      Prior to Caché 2014.1, produced a core with shared
                               memory omitted; now deprecated. Best to use OS-specific
                               methods to omit shared memory.
                     OpenVMS   Unimplemented.
                     Windows   Reserved to InterSystems.
 3    INTERMEDIATE   Unix      Unimplemented.
                     OpenVMS   Unimplemented.
                     Windows   Effective 2014.1, produces cacheipid.dmp.
 4    MINIMAL        Unix      Unimplemented.
                     OpenVMS   Unimplemented.
                     Windows   Effective 2014.1, produces cachempid.dmp.
 5    NOHANDLER      Unix      Do not register a signal handler. Leave all decisions
                               about core creation up to the operating system.
                     OpenVMS   Unimplemented.
                     Windows   Unimplemented.
 6    NOCORE         Unix      Do not generate a core file.
                     OpenVMS   Unimplemented.
                     Windows   Unimplemented.
 7    NOFORK         Unix      Create a core dump (with shared memory), but do so from
                               the original failing process, not a forked copy of the
                               failing process.
                     OpenVMS   Unimplemented.
                     Windows   Unimplemented.
 8    NOFORKNOSHARE  Unix      Create a core dump without shared memory, but do so from
                               the original failing process, not a forked copy of the
                               failing process.
                     OpenVMS   Unimplemented.
                     Windows   Unimplemented.
The default DumpStyle is 0 = NORMAL, except on Windows since Caché 2014.1, where it is 3 = INTERMEDIATE.

There are three ways to change the value of DumpStyle:

① Place this section in your cache.cpf or iris.cpf file (you will need to use your operating system's text editor for this):

[Debug]
dumpstyle=1

The number after the equals sign is the new default DumpStyle. Restart Caché, Ensemble, HealthShare, or InterSystems IRIS data platform. This is effective for all processes, and defines a new default for all processes unless you override it with method ② or ③ below.

② Issue the command:

SET old=$SYSTEM.Config.ModifyDumpStyle(1)

The number in parentheses is the new value for DumpStyle; the old value is returned. This command is effective for all new processes created after it is run. Existing processes continue to run with their prior DumpStyle. This command became available with Caché 2014.1. For older versions, you can use this command instead:

VIEW $ZUTIL(40,2,165):-2:4:1

where the new value for DumpStyle is the final digit.

③ Issue the command, or place it in your application:

VIEW $ZUTIL(40,1,48):-1:4:1

where the new value for DumpStyle is the final digit. This is effective only for the process issuing the command, and overrides methods ① and ②.

A frequently asked question is: how large will my cores be? The answer is the amount of [dirty] memory used by the process at the time of failure, plus a little more to describe that memory's layout. Unfortunately, there are no simple formulæ to compute that size accurately. The best estimate depends upon whether or not you will be including shared memory.

Start with this:

size = base + heap + extra + gmheap + routine + d × global

The gmheap + routine + d × global portion applies only if shared memory is included.

where base is the base amount of memory needed. Start with the size of the cache[.exe] or irisdb[.exe] image.
heap is the memory used by the local variables your application creates. Estimate this by taking the difference of the system variable $STORAGE when your application starts and deep inside its most memory-intensive loop.
extra is for features that require extra memory. There is no definitive list, but $SORTBEGIN() and MERGE are well known to use extra memory.
gmheap is from the [config] gmheap= line of your cache.cpf or iris.cpf file. This value appears in the configuration file in kio, so multiply by 1024. Skip this if you intend to exclude shared memory.
routine is the sum of all the values from the [config] routines= line of your cache.cpf or iris.cpf file. This value appears in the configuration file in Mio, so multiply by 1048576. Skip this if you intend to exclude shared memory.
d accounts for the need to describe the memory used by global. This value will be somewhat greater than one. The actual value will vary among versions, platforms, and the global buffer size you choose. For all-8-kio buffers on the InterSystems IRIS data platform on AIX, the value is about 1.05.
global is the sum of all the values from the [config] globals= line of your cache.cpf or iris.cpf file. This value appears in the configuration file in Mio, so multiply by 1048576. Skip this if you intend to exclude shared memory.

Note: As a practical matter, on most large production deployments global is large enough that it dwarfs all other factors. To save core files with shared memory in a typical large production deployment, size = 1.25 × global is a reasonable estimate.
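As an illustration, the estimate above can be worked through in a short shell script. All of the input values below (a 2 Gio global buffer pool, 300 Mio of routine buffers, and so on) are hypothetical placeholders, not recommendations; substitute the figures from your own .cpf file and your own $STORAGE measurements.

```shell
#!/bin/sh
# Hypothetical sizing example for a core that includes shared memory.
# Every input value is a placeholder; take the real ones from your .cpf file.
base=$((  200 * 1048576 ))    # size of the irisdb image, assumed ~200 Mio
heap=$((   50 * 1048576 ))    # $STORAGE delta observed in the busiest loop
extra=$((  25 * 1048576 ))    # allowance for $SORTBEGIN()/MERGE
gmheap=$(( 307200 * 1024 ))   # [config] gmheap= is in kio, so multiply by 1024
routine=$(( 300 * 1048576 ))  # [config] routines= is in Mio, so multiply by 1048576
global=$(( 2048 * 1048576 ))  # [config] globals= is in Mio, so multiply by 1048576

# d ≈ 1.05: overhead to describe the global buffer memory. Shell arithmetic
# has no floats, so scale by 105/100 instead.
size=$(( base + heap + extra + gmheap + routine + global * 105 / 100 ))
echo "estimated core size: $(( size / 1048576 )) Mio"
# → estimated core size: 3025 Mio
```

With these placeholder inputs the global buffer term dominates, which is exactly the point of the 1.25 × global rule of thumb above.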

If you are concerned about the amount of disk space needed to store a core file, consider:

  • If you will be running on a cloud service, and you don't want to pay to keep a large amount of disk space reserved, you may have to accept limited core files; that is, core files without shared memory.
  • If you will be running on a server that you control, but don't want to reserve a large amount of disk space on your expensive disk array, you can purchase a cheap USB disk. You don't need a fast or redundant disk for core files. You may want to attach a hasp staple to the cheap USB disk with epoxy and padlock it to something substantial.

Most operating systems have controls to redirect cores to a common directory and to control the amount of information in cores. These should also be set, and you should consider the ramifications of doing so, especially from a data-privacy perspective. The following sections cover the details for individual operating systems.

Moving cores to a common directory is very useful for capacity planning, but may also make the cores more accessible to anyone wishing to exfiltrate data from your site.

Many types of problems simply cannot be solved without including shared memory in the core. Cores that include shared memory tend to be much larger than cores that do not. Most of the difference is the size of your global and routine buffers.

If you are processing sensitive information, a core file without shared memory will only contain the sensitive information being processed by the one process that failed. A core file with shared memory will also contain all the global variables recently accessed by every process. "Recently" might represent minutes, or considerably longer.

AIX

Full (and modern-style) cores should be enabled with smit:

System Environments
>  Change / Show Characteristics of Operating System
> >  Enable full CORE dump                               true
> >  Use pre-430 style CORE dump                         false

This can also be seen from the command line with:

# lsattr -E -l sys0 | egrep 'fullcore|pre430core'↩
fullcore           true             Enable full CORE dump              True
pre430core         false            Use pre-430 style CORE dump        True

And set with:

# chdev -l sys0 -a fullcore=true -a pre430core=false -P↩

The -P makes the change permanent.

By default, core files are written to the default directory of the process at the time of process failure. Typically that is the same directory as one of your main CACHE.DAT or IRIS.DAT files. This can be changed with smit:

Problem Determination
> Change/Show/Reset Core File Copying Directory

or from the command line with:

# chcore -p on -l /cores -n on -d↩

Ensure that the file /etc/security/limits has a section with the lines:

default:
core = -1

Finally, ensure that by whatever means you set up environment variables for user processes, each user has CORE_NOSHM defined or not defined as desired. If CORE_NOSHM=1 is defined, core files exclude shared memory. If CORE_NOSHM=0, or the variable is not defined at all, core files include shared memory. The easy way to do this for all users is to edit /etc/environment to include the line:

CORE_NOSHM=1

To control on an individual basis which users have shared-memory cores suppressed, edit one of these files, based upon the user and the shell they use:

CORE_NOSHM=1;export CORE_NOSHM    # sh in /etc/profile or $HOME/.profile
export CORE_NOSHM=1               # ksh in /etc/.kshrc or $HOME/.kshrc
export CORE_NOSHM=1               # bash in /etc/bashrc or ~/.bashrc
setenv CORE_NOSHM 1               # csh in ~/.cshrc

Docker

Core file creation for an InterSystems IRIS data platform Docker container is controlled by the host Linux system (with a few caveats). You must plan to send core files directly to an operating system file. That file can be inside the Docker container, or in a directory mapped onto the host Linux system. The advantage of sending the core file to a directory mapped onto the host Linux system is that it will survive a complete failure of the container.

Since the core file must go to an operating system file, you must disable any advanced core-capturing software on the host platform. You will want to set /proc/sys/kernel/core_pattern with an appropriate value for both the host and container systems. You should choose a relatively simple directory that you know will exist on both the host and the container (/tmp or /cores are the obvious best choices). You may also want to include variables to ensure that cores from multiple Docker containers don't overwrite each other. Thus /cores/core.%p.%e is a good choice.
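To preview what such a pattern yields, the substitution can be sketched by hand. The pid 12345 and executable name irisdb below are made-up illustration values; on a real system the kernel performs this expansion itself when a process dumps core.

```shell
# Simulate the kernel's expansion of a core_pattern for one hypothetical crash.
pattern='/cores/core.%p.%e'
pid=12345      # %p: pid of the failing process (illustrative value)
exe=irisdb     # %e: executable name (illustrative value)
echo "$pattern" | sed "s/%p/$pid/; s/%e/$exe/"
# → /cores/core.12345.irisdb
```

Because the pid and executable name differ per crash, cores from multiple containers sharing the /cores mapping will not collide.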

Host OS       How to disable advanced core capturing
RedHat Linux  You must disable the Automatic Bug Reporting Tool (ABRT).
SuSE Linux    Up through SuSE Linux Enterprise Server 11, SuSE did not have any advanced core-capturing software, so all you need to do is set /proc/sys/kernel/core_pattern per the instructions. However, as yet we do not provide instructions for disabling the advanced core-capturing software in SuSE Linux Enterprise Server 12, so SuSE 12 and later versions are currently unsuitable hosts for Docker containers.
Ubuntu Linux  You must disable apport.

When you launch the container, you may want to include the option to map the directory you will be using for cores onto the host operating system. Thus:

# docker run ⋯ -v /cores:/cores ⋯ ↩

If you don't include -v /cores:/cores, any core files created by a process failure inside the Docker container will survive only as long as the Docker container is running. If the mapping given by the -v option is not symmetrical (that is, if the values to the left and right of the colon differ), you may fail to capture some cores.

Set the core-file size ulimit. Since this is a runtime decision, add the following to the docker run command:

# docker run ⋯ --ulimit core=-1 ⋯ ↩

HP–UX

Enable placing cores in a common directory, with extended naming, with:

# coreadm -e global -g /cores/core.%p.%f↩

%p places the pid in the pathname; %f places the name of the executable (such as cache or iris) in the pathname. See:

% man 1m coreadm↩

for more options.

Review whether shared memory is included in core files with:

# /usr/sbin/kctune core_addshmem_read↩
# /usr/sbin/kctune core_addshmem_write↩

Change with:

# /usr/sbin/kctune core_addshmem_read=1↩
# /usr/sbin/kctune core_addshmem_write=1↩

1 means enable; 0 means disable. HP–UX divides shared memory into two types. In general, InterSystems only uses write shared memory, but we recommend setting both types the same.

On HP–UX the core size is limited by the maxdsiz_64bit kernel parameter. Make sure that it is set high enough that a full core can be generated.

Review with:

# /usr/sbin/kctune maxdsiz_64bit↩

Set with:

# /usr/sbin/kctune maxdsiz_64bit=4294967296↩

A user can further limit their core size with a ulimit -c command. This command should be removed from /etc/profile, $HOME/.profile, and similar files for other shells, unless it is your intention to limit core files.

RedHat Linux

If you are running RHEL 6.0 or later (also CentOS), RedHat has added the Automatic Bug Reporting Tool (ABRT). As installed, this is not compatible with Caché, Ensemble, HealthShare, or InterSystems IRIS data platform. You need to decide whether you wish to configure ABRT to support Caché, Ensemble, HealthShare, or InterSystems IRIS data platform, or to disable ABRT.

Below, sections labeled ABRT apply to use with ABRT,
while sections labeled AB/RT apply to traditional use without ABRT.

ABRT To make InterSystems products compatible with ABRT, determine the version of ABRT you are running:

# abrt-cli --version↩

Edit the ABRT configuration file. The name varies depending upon the version of ABRT:

ABRT 1.x: /etc/abrt/abrt.conf
ABRT 2.x: /etc/abrt/abrt-action-save-package-data.conf

If you installed Caché, Ensemble, or HealthShare with a cinstall command (most common), or InterSystems IRIS data platform with an irisinstall command, find the ProcessUnpackaged= line, and change the value to yes.

ProcessUnpackaged = yes

Otherwise, if you installed Caché, Ensemble, HealthShare, or InterSystems IRIS data platform from an RPM module, find the OpenGPGCheck= line, and change the value to no.

OpenGPGCheck = no

Regardless of how you installed Caché, Ensemble, HealthShare, or InterSystems IRIS data platform, find the BlackListedPaths= line, and add a reference to cstat or irisstat in the installation's bin directory. If the BlackListedPaths= line does not exist, add it at the end with just the cstat or irisstat reference.

BlackListedPaths=[retain_existing_list,]installation_directory/bin/cstat

Save your edits, and restart abrtd:

# service abrtd restart↩

Configured as such, ABRT creates a new directory (under /var/spool/abrt or /var/tmp/abrt) for each process failure, and in that directory places the core and associated information.

When a process failure occurs, issue the command:

# abrt-cli --list↩      # for ABRT 1.x
# abrt-cli list↩        # for ABRT 2.x

This will show a list of recent process failures, and for each will give a directory specification. In each directory will be a coredump file, along with many other small files that collectively can be quite useful in determining the cause of the process failure.

% tar -cvzf wrcnumber-core.tar.gz /var/spool/abrt/directory/*↩

where wrcnumber is the number InterSystems assigns to the investigation of your case. You can send us the compressed wrcnumber-core.tar.gz file.

AB/RT Alternatively, you can disable ABRT with:

# service abrtd stop↩
# service abrt-ccpp stop↩      # ABRT 2.x only.

To permanently disable ABRT:

# chkconfig abrtd off↩
# chkconfig abrt-ccpp off↩      # ABRT 2.x only.

Finally, you need to update /proc/sys/kernel/core_pattern; see the next section.

AB/RT You can control where cores are deposited (unless you are using ABRT).

If you are using ABRT, you must skip this step.
If you have disabled ABRT, you must perform this step.
If you never had ABRT, this step is optional.

Edit the file /proc/sys/kernel/core_pattern.

In the simple case, just use:

core

It is generally useful to add the pid and the name of the program generating the core with:

core.%p.%e

You might also place the cores in a common directory with:

/cores/core.%p.%e

Verify that all users have write access to the directory chosen. See man core for more options. You should make this change permanent by creating a file in the directory /etc/sysctl.d with a name ending in .conf, containing:

kernel.core_pattern=/cores/core.%p.%e
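The persistence step can be sketched as follows. The file name 60-core-pattern.conf is an arbitrary choice; a scratch directory stands in for /etc/sysctl.d so the sketch is safe to run without root, and the real sysctl step is shown as a comment.

```shell
# Sketch: persisting a core pattern the way /etc/sysctl.d expects it.
# A scratch directory stands in for /etc/sysctl.d so this runs without root;
# on a real system, write the file into /etc/sysctl.d instead.
sysctl_d=$(mktemp -d)
echo 'kernel.core_pattern=/cores/core.%p.%e' > "$sysctl_d/60-core-pattern.conf"

# On the real system you would then load it with:
#   sysctl -p /etc/sysctl.d/60-core-pattern.conf
# or reboot. Here we just confirm the file parses as a key=value pair:
cut -d= -f1 "$sysctl_d/60-core-pattern.conf"
# → kernel.core_pattern
```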

ABRT AB/RT You should set /proc/self/coredump_filter to control the amount of memory dumped to the core. This can be done in an appropriate /etc/profile.d/something.sh file. The command is:

# echo 0x33 >/proc/self/coredump_filter↩

The exact bitmap used depends upon the level of data you wish to collect. The meanings of the bits can be found in man core; samples that make sense for InterSystems products are:

Bit    Description                        Need for InterSystems
0x01   Anonymous private mappings.        Always needed.
0x02   Anonymous shared mappings.         Needed for complex problems.
0x04   File-backed private mappings.      Maybe needed for problems with $ZF().
0x08   File-backed shared mappings.       Maybe needed for problems with $ZF().
0x10   Dump ELF headers.                  Always needed.
0x20   Dump private huge pages.           Not currently used by InterSystems.
0x40   Dump shared huge pages.            Not currently used by InterSystems.
0x80   Dump private DAX pages (RHEL 8).   Not currently used by InterSystems.
0x100  Dump shared DAX pages (RHEL 8).    Not currently used by InterSystems.
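One way to see where the suggested value 0x33 comes from is to OR together its component bits. This is simple arithmetic, not a recommendation beyond the 0x33 already given above:

```shell
# 0x33 = anonymous private (0x01) | anonymous shared (0x02)
#      | ELF headers (0x10) | private huge pages (0x20)
printf '0x%x\n' $(( 0x01 | 0x02 | 0x10 | 0x20 ))
# → 0x33
```

To collect file-backed mappings as well (for $ZF() problems), you would OR in 0x04 and 0x08, giving 0x3f.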

As an alternative to placing this in a shell-specific script, you can modify this during boot. These instructions only apply if you boot with grub2. You can test this with:

# grub2-install --version↩
grub2-install (GRUB) 2.02~beta2

Edit /etc/default/grub. Change the line that begins GRUB_CMDLINE_LINUX_DEFAULT=. If the line doesn't already exist in the file, just add it at the end. It should contain:

GRUB_CMDLINE_LINUX_DEFAULT="oldcmd coredump_filter=newval"

Note: oldcmd is the old value of GRUB_CMDLINE_LINUX_DEFAULT (omit it if the line didn't previously exist). newval is the new value for coredump_filter, in hexadecimal with a leading "0x".

Run:

# grub2-mkconfig -o /boot/grub2/grub.cfg↩

ABRT AB/RT You should set the ulimit -c for all processes to unlimited. This can be set globally in the file /etc/security/limits.conf. Add these two lines:

*               soft    core            unlimited
*               hard    core            unlimited
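After editing limits.conf, sessions started from then on pick up the new limits. You can check what is in effect for the current shell (read-only, safe to run anywhere) with:

```shell
# Show the core-file size limits in effect for the current shell.
# New values from limits.conf only appear in sessions started after the edit.
echo "soft: $(ulimit -S -c)"
echo "hard: $(ulimit -H -c)"
```

Both should report "unlimited" in a fresh login session once the limits.conf change is in place.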

SuSE Linux

If you are running SuSE Linux Enterprise Server 12 or later, SuSE now stores all cores in the systemd journal. Core files stored in the systemd journal are transitory: they do not survive a system reboot. Cores, if needed, must be extracted from the systemd journal before any system reboot.

To list the core files currently in the systemd journal:

# [systemd-]coredumpctl list↩

To extract a core, selected by the pid that created it:

# [systemd-]coredumpctl -o core.morename dump pid↩

Note: the systemd- prefix was removed from the command name effective with SuSE 12 SP2. It is recommended that you leave this systemd behaviour in place, and not attempt to defeat it.

If you are running an older version of SuSE Linux Enterprise (11 or earlier), you can control where cores are deposited by editing the file /proc/sys/kernel/core_pattern.

In the simple case, just use:

core

It is generally useful to add the pid and the name of the program generating the core with:

core.%p.%e

You might also place the cores in a common directory with:

/cores/core.%p.%e

Verify that all users have write access to the directory chosen. See man core for more options.

You can make this change permanent by appending these lines to the file /etc/sysctl.conf:

# Make this core pattern permanent (SuSE 12 breaks this, don't use):
kernel.core_pattern=/cores/core.%p.%e

You should set /proc/self/coredump_filter to control the amount of memory dumped to the core. This can be done in an appropriate /etc/profile.d/something.sh file. The command is:

# echo 0x33 >/proc/self/coredump_filter↩

The exact bitmap used depends upon the level of data you wish to collect. The meanings of the bits can be found in man core; samples that make sense for InterSystems products are:

Bit    Description                        Need for InterSystems
0x01   Anonymous private mappings.        Always needed.
0x02   Anonymous shared mappings.         Needed for complex problems.
0x04   File-backed private mappings.      Maybe needed for problems with $ZF().
0x08   File-backed shared mappings.       Maybe needed for problems with $ZF().
0x10   Dump ELF headers.                  Always needed.
0x20   Dump private huge pages.           Not currently used by InterSystems.
0x40   Dump shared huge pages.            Not currently used by InterSystems.
0x80   Dump private DAX pages (SuSE 15).  Not currently used by InterSystems.
0x100  Dump shared DAX pages (SuSE 15).   Not currently used by InterSystems.

As an alternative to placing this in a shell-specific script, you can modify this during boot. To do this, use yast2. The user interface for yast2 will vary depending upon whether you are connected with a terminal interface (it will use a curses interface) or a GUI interface. These instructions try to be interface agnostic.

After launching yast2, select System → Boot Loader from the menu.
Select the Kernel Parameters tab.
Look for the Optional Kernel Command Line Parameter field.
If the field does not already contain coredump_filter=0xvalue, append it to the field with a space separator. If it already contains the assignment, simply edit value.
Exit the menu system, and reboot.

You should set the ulimit -c for all processes to unlimited. This can be set globally in the file /etc/security/limits.conf. Add these two lines:

*               soft    core            unlimited
*               hard    core            unlimited

Note: It may be necessary to disable AppArmor, which blocks application behaviour it considers unusual; writing a core file may be considered unusual.

# rcapparmor stop↩

Ubuntu Linux

Ubuntu uses apport to trap all process failures and, for packages added with its installation package manager, create apport reports, which contain encoded and compressed cores with additional information. It is possible to ask apport to process unpackaged code, that is, applications not installed with Ubuntu's package manager. Unfortunately, in doing so, Canonical treats the apport reports created for unpackaged code as something it can examine for the improvement of Ubuntu.

Since it is possible to extract your data from an apport report, you almost certainly do not want to enable apport processing of unpackaged code. Your only choice is to disable apport. To do this, edit /etc/default/apport, and edit the enabled= line:

enabled=0

Create a file /etc/sysctl.d/30-core-pattern.conf (or any similar name in that directory). In that file place:

kernel.core_pattern=/cores/core.%p.%e

Ensure that the directory you specify for saving cores is publicly writable and has sufficient disk space. See man core for more options.

You should set /proc/self/coredump_filter to control the amount of memory dumped to the core. This can be done in an appropriate /etc/profile.d/something.sh file. The command is:

# echo 0x33 >/proc/self/coredump_filter↩

The exact bitmap used depends upon the level of data you wish to collect. The meanings of the bits can be found in man core; samples that make sense for InterSystems products are:

Bit    Description                           Need for InterSystems
0x01   Anonymous private mappings.           Always needed.
0x02   Anonymous shared mappings.            Needed for complex problems.
0x04   File-backed private mappings.         Maybe needed for problems with $ZF().
0x08   File-backed shared mappings.          Maybe needed for problems with $ZF().
0x10   Dump ELF headers.                     Always needed.
0x20   Dump private huge pages.              Not currently used by InterSystems.
0x40   Dump shared huge pages.               Not currently used by InterSystems.
0x80   Dump private DAX pages (16.04 LTS).   Not currently used by InterSystems.
0x100  Dump shared DAX pages (16.04 LTS).    Not currently used by InterSystems.

As an alternative to placing this in a shell-specific script, you can modify this during boot. These instructions only apply if you boot with grub2. You can test this with:

# grub-install --version↩
grub-install (GRUB) 2.02-2ubuntu8.12

Edit /etc/default/grub. Change the line that begins GRUB_CMDLINE_LINUX_DEFAULT=. If the line doesn't already exist in the file, just add it at the end. It should contain:

GRUB_CMDLINE_LINUX_DEFAULT="oldcmd coredump_filter=newval"

Note: oldcmd is the old value of GRUB_CMDLINE_LINUX_DEFAULT (omit it if the line didn't previously exist). newval is the new value for coredump_filter, in hexadecimal with a leading "0x".

Run:

# grub-mkconfig -o /boot/grub/grub.cfg↩

You should set the ulimit -c for all processes to unlimited. This can be set globally in the file /etc/security/limits.conf. Add these two lines:

*               soft    core            unlimited
*               hard    core            unlimited

macOS (OS X, Darwin)

Mac OS X was renamed OS X, and later renamed macOS. All of these are Apple's proprietary user interface layered upon Darwin, an operating system that Apple derived from BSD Unix and, theoretically, released as open source. Nevertheless, Apple releases Darwin in such a way that, as a practical matter, no one will ever run just Darwin.

InterSystems products only require Darwin, but since Darwin isn't practically available, all instructions are based upon the full Apple Mac OS X, OS X, or macOS.

macOS includes CrashReporter, a tool that automatically intercepts process failures, packages the failure details as text logs, and sends the data to Apple for analysis. CrashReporter will capture process failure details for third-party software, such as Caché, Ensemble, HealthShare, and InterSystems IRIS data platform, which, in theory, Apple might forward to InterSystems.

InterSystems does not receive CrashReporter logs from Apple, nor have we developed the ability to analyze them. InterSystems works strictly from core files. Fortunately, CrashReporter works independently from core file creation. That is, it is possible to process a process failure through neither, either, or both of CrashReporter and core file creation.

CrashReporter preferences can be set in System Preferences → Security & Privacy, Privacy tab. The panel name and selection of boxes varies from version to version. In Mac OS X 10.4 the panel was called just Security, and there were no relevant check boxes; in those older versions the user was always presented with a dialog box on any process failure, asking whether they wanted to send the data to Apple for analysis.

Depending upon the sensitivity of the data you process, you may want to untick all the options related to CrashReporter.

The method for enabling cores in macOS has undergone significant changes from version to version. See the following chart, and use the appropriate method for your version.

Method 1 (edit /hostconfig):
  Public Beta     Kodiak         unsupported
  Mac OS X 10.0   Cheetah        unsupported
  Mac OS X 10.1   Puma           unsupported
  Mac OS X 10.2   Jaguar         unsupported
  Mac OS X 10.3   Panther        Caché (PowerPC) 5.0, 5.1

Method 2 (edit /etc/launchd.conf):
  Mac OS X 10.4   Tiger          Caché 5.0 (PowerPC), 5.1 (PowerPC), 5.2*, 2007.1*, 2008.1 (x86), 2008.2 (x86), 2009.1 (x86)
  Mac OS X 10.5   Leopard        Caché (x86) 2008.1, 2008.2, 2009.1, 2010.1
  Mac OS X 10.6   Snow Leopard   Caché (x86-64) 2010.1, 2010.2, 2011.1, 2012.1, 2012.2
  Mac OS X 10.7   Lion           Caché (x86-64) 2011.1, 2012.1, 2012.2, 2013.1, 2014.1
  OS X 10.8       Mountain Lion  Caché (x86-64) 2012.2, 2013.1, 2014.1, 2015.1
  OS X 10.9       Mavericks      Caché (x86-64) 2013.1, 2014.1, 2015.1, 2015.2, 2016.1, 2016.2

Method 3 (not automatic):
  OS X 10.10      Yosemite       Caché (x86-64) 2014.1, 2015.1, 2015.2, 2016.1, 2016.2
  OS X 10.11      El Capitan     Caché (x86-64) 2016.1, 2016.2, 2017.1DEV, 2017.2DEV, 2018.1DEV
  macOS 10.12     Sierra         Caché (x86-64) 2017.1, 2017.2, 2018.1
  macOS 10.13     High Sierra    Caché (x86-64) 2018.1; IRIS 2018.1, 2019.1, 2019.2, 2019.3, 2019.4, 2020.1, 2020.2, 2020.3
  macOS 10.14     Mojave         IRIS 2019.1, 2019.2, 2019.3, 2019.4, 2020.1, 2020.2, 2020.3
  macOS 10.15     Catalina       unreleased

Method 1: For Mac OS X 10.3 (Panther) and prior unsupported versions: Edit the file /hostconfig. Find the line COREDUMPS=, and change the value to -YES-.

COREDUMPS=-YES-

Method 2: For Mac OS X 10.4 (Tiger) through OS X 10.9 (Mavericks), edit the file /etc/launchd.conf, and add the line:

limit core unlimited

And re­boot.

Method 3: For OS X 10.10 (Yosemite) and newer, /etc/launchd.conf has been eliminated, and core file generation is now half disabled. Users must enable cores for each process with:

% ulimit -c unlimited↩

prior to running their application, and a privileged user must run:

# launchctl limit core unlimited↩

Then log out, and log in again prior to starting Caché. Apple specifically does not provide a good way to automate this, as they consider the default generation of a core file to be a potential security vulnerability.
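The two steps of Method 3 can be sketched as a short shell session. This is a hedged sketch: the launchctl step applies only to macOS 10.10 and later and requires a privileged user, so it is guarded here to keep the sketch harmless elsewhere.

```shell
# Per-process step: raise the soft core limit in the shell that will
# start the application.
ulimit -c unlimited 2>/dev/null || echo "could not raise the soft core limit"
ulimit -c            # report the core limit now in effect

# System-wide step (macOS 10.10+ only; run as a privileged user).
if command -v launchctl >/dev/null 2>&1; then
    launchctl limit core unlimited || echo "launchctl step needs a privileged user"
fi
```

Remember that the per-process ulimit must be issued in the same shell session that later starts the application.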

Apple does provide a way to totally disable core file generation. This is done by editing the file /etc/sysctl.conf, and adding the line:

kern.coredump=0

It can be re-enabled by removing the line, or by changing the value to 1.

OpenVMS

By default, Caché and Ensemble will only produce CACCVIO-pid.LOG files for failing processes. With these, only relatively simple problems can be solved. These CACCVIO-pid.LOG files are always placed in the process's default directory (typically the directory of a CACHE.DAT file), and can only be redirected by changing the process's default directory.

Caché and Ensemble may also produce CERRSAVE-pid.LOG files. These are similar to CACCVIO-pid.LOG files, and usually you do not need to concern yourself with the difference. In some cases Caché and Ensemble will produce both files in response to a failure. In all cases seen so far, the CACCVIO-pid.LOG file is produced first with the full context of the error, while the CERRSAVE-pid.LOG file is produced during final rundown of the process and contains comparatively little information of value.

If extended process dumps (FULL dumps) are enabled, they too will be placed in the process's default directory. However, they can be redirected by defining the logical name SYS$PROCDMP to point to a directory in which to store the process dump. This logical name can be defined at the /SYSTEM level. The file name will be CACHE.DMP or CSESSION.DMP.

OpenVMS also provides the logical name SYS$PROTECTED_PROCDMP. You should also define that logical name, with both /EXECUTIVE_MODE and /SYSTEM. This applies to process failures of privileged images, and parts of Caché are privileged. The OpenVMS documentation will advise you to define the two logical names to different directories, and place higher security on the directory corresponding to SYS$PROTECTED_PROCDMP. This is based upon the assumption that the data processed by privileged images is more sensitive than that processed by non-privileged images. If both are sensitive, it is fine to point both logical names to the same directory.

There is a history of defects affecting the creation of CACCVIO-pid.LOG and CERRSAVE-pid.LOG files as well as full process dumps. These are the most important changes.

Change First version Description
JLC1809 Caché 2015.2 Prior to this change most CERRSAVE-pid.LOG files were useless.
JO2422 Caché 2012.1 Prior to this change, conditions that would generate a CERRSAVE-pid.LOG file always created the limited-information file, ignoring DumpStyle.
JLC1326 Caché 2011.1 Prior to this change, registers were not included in CACCVIO-pid.LOG and CERRSAVE-pid.LOG files on the Itanium platform. This seriously hampered our ability to solve all but simple problems with these files. We could still match with already-solved problems.
JLC931 and JLC959 Caché 2007.2 Prior to these changes, no useful information was recorded in CACCVIO-pid.LOG and CERRSAVE-pid.LOG files on the Itanium platform.
JO1968 Caché 5.2 Prior to this change, conditions that would generate a CACCVIO-pid.LOG file always created the limited-information file, ignoring DumpStyle.

Solaris

You can en­able plac­ing cores in a com­mon di­rec­tory with ex­tended nam­ing with:

# coreadm -e global -g /cores/core.%p.%f -G all↩
%p places the pid in the path­name.
%f places the name of the ex­e­cutable (such as cache) in the path­name.
The -G all includes all types of memory, that is, a full core. Omit this for a default core, which still includes most shared memory. The following things can be stored in the core:
Code InterSystems usage In default
stack Needed yes
heap Needed yes
shm Not used yes
ism Not used yes
dism Caché shared memory yes
text Useful for $ZF() failures yes
data Needed yes
rodata Not used yes
anon Needed yes
shanon Generally small yes
ctf Needed yes
symtab Useful for $ZF() failures no
shfile Not used no

all includes all types of memory; default includes all but the last two. If you want significantly smaller cores (to save space at the expense of making fewer problems solvable), the most space is saved by removing dism shared memory. Do this with:

# coreadm -e global -g /cores/core.%p.%f -G default-dism↩

See:

% man -s 1m coreadm↩

for more options.

By default, users have

% ulimit -c unlimited↩

You may use the ulimit command (or the limit command in csh) to disable cores, but coreadm is generally more flexible. So you should ensure that ulimit commands don't appear in /etc/profile or $HOME/.profile, or in the corresponding files for other shells.
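One hedged way to audit startup files for stray ulimit commands is a simple grep loop. The fixture file below is illustrative only; on a real system, point the list at /etc/profile, $HOME/.profile, and the rc files of any other shells in use.

```shell
# Create a throwaway fixture standing in for a startup file that
# clamps core size (illustrative only).
fixture=$(mktemp)
echo 'ulimit -c 0   # would suppress cores for every login' > "$fixture"

# Scan each file for ulimit commands; on a real system use e.g.
# FILES="/etc/profile $HOME/.profile"
FILES=$fixture
for f in $FILES; do
    [ -f "$f" ] && grep -Hn 'ulimit' "$f"
done
rm -f "$fixture"
```

Any line this prints is a candidate override to review and remove.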

Windows

The information to be included in a dump file on Windows is fully controlled by the DumpStyle parameter in the cache.cpf file (or another interface for changing DumpStyle described above).

Testing

Local security setup, among other problems, can prevent a core from actually being written. It can be very useful to test whether a core will actually be created under real-world conditions. To do that, enter the command:

USER>DO $ZUTIL(150,"DebugException")↩

To be certain, you should test this statement interactively, inside JOBs (assuming your application uses the JOB command), and even hidden inside an option of your application that your users will not accidentally select. Verify that you get a core file, and follow the sanity check in the next section to verify that it is a good core file.
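Once the test statement has run, a quick way to confirm that a core actually appeared is to look for recently created files named core*. The location and name vary by operating system and by the core_pattern/coreadm settings described earlier, so the search root below is only a placeholder.

```shell
# Search a directory tree (here: the current directory) for core files
# created in the last 10 minutes. Adjust the root as needed: /cores on
# macOS, the instance directory, or wherever your core settings point.
search_root=.
find "$search_root" -maxdepth 3 -name 'core*' -mmin -10 2>/dev/null | head -5
```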

Sanity Test

Core files (and process dumps) can be quite large, and they can contain sensitive information. Before transmitting a core file to InterSystems for analysis, it is best to perform a sanity test of the core file on the system that generated it, or a very similar system.

Based upon your op­er­at­ing sys­tem, please per­form the fol­low­ing san­ity test:

OS Sanity test
AIX # dbx cache core
(dbx) set $stack_details↩
(dbx) where↩
(dbx) quit↩
Send us the output from the above commands when opening a problem with the WRC. If you do not have dbx installed on your system, just open a new problem.
HP–UX # gdb cache core
(gdb) frame 0↩
(gdb) while 1↩
 > info frame↩
 > up↩
 > end↩
(gdb) quit↩
# adb core
adb> $c↩
adb> $q↩
Send us the output from one of the two command sets above depending upon which debugger you have available. If you have both, gdb (actually Wildebeest) is preferred.
RedHat Linux Use this common sanity test for all flavours of Linux. # gdb cache core
(gdb) frame 0↩
(gdb) while 1↩
 > info frame↩
 > up↩
 > end↩
(gdb) quit↩
SuSE Linux
Ubuntu Linux Send us the output from the above command when opening a problem with the WRC. If you do not have gdb installed on your system, just open a new problem.
macOS (Darwin) # lldb↩
(lldb) target create -c core
(lldb) thread backtrace all↩
(lldb) quit↩
# gdb cache core
(gdb) frame 0↩
(gdb) while 1↩
 > info frame↩
 > up↩
 > end↩
(gdb) quit↩
Send us the output from lldb (if from OS X 10.8 (Mountain Lion) or later), otherwise send the output from gdb (for Mac OS X 10.7 (Lion) or earlier).
OpenVMS $ ANALYZE/CRASH dumpfile.DMP↩
SDA> SHOW CALL_FRAME/ALL↩
If you are still running OpenVMS v7.x (or earlier), the previous command will not work, instead use:
SDA> SHOW CALL_FRAME↩
SDA> SHOW CALL_FRAME/NEXT↩
Repeat the prior command until you get an error.
SDA> QUIT↩
$ ANALYZE/PROCESS dumpfile.DMP↩
DBG> SHOW CALL/IMAGE↩
DBG> QUIT↩
Send us the output from either SDA or the debugger, but the output from SDA is preferred. If you only have a CACCVIO-pid.LOG file, check that it is not empty or almost empty.
Solaris # mdb cache core
> ::stackregs↩
> ::quit↩
# dbx cache core
(dbx) where↩
(dbx) quit↩
For almost all applications, InterSystems prefers the dbx debugger on Solaris, but for a sanity test, mdb is better. Send us the stack trace produced by mdb or dbx (mdb preferred) when you open a problem report with the WRC.
Windows Currently there is no recommended sanity check for Windows process dumps.

Attach the details of the sanity test to your WRC case, or e-mail to: support@intersystems.com.

Transmission

Be prepared to send us the full core along with support files that may be needed for your particular operating system. We need to know the exact version of Caché, Ensemble, HealthShare, or InterSystems IRIS data platform that generated the core file. If you have relinked the software to include custom $ZF() functions, please send the executable. (Actually, it is more convenient if you always send the executable.)

On most Unix systems, it is also best to send the libraries used by the executable. The likelihood that we will need libraries for any given platform varies. Consult this table:

OS Hardware Need Libraries Support Level
AIX PowerPC Unlikely A
HP–UX PA–RISC Very Likely C
HP–UX Itanium Likely A
Linux (all flavours) x86 Likely A
Linux (all flavours) x86_64 Likely A
Linux (all flavours) Itanium Likely D
macOS PowerPC Unlikely D
macOS x86 Unlikely C
macOS x86_64 Unlikely A
OpenVMS VAX n/a D
OpenVMS Alpha AXP n/a B
OpenVMS Itanium n/a B
Solaris x86_64 Very Likely B
Solaris Sparc Unlikely B
Tru64 UNIX Alpha AXP Very Likely D
Windows x86 n/a A
Windows x86_64 n/a A
Windows Itanium n/a D

Ex­pla­na­tion of Sup­port lev­els

A
As of the post­ing of this doc­u­ment, In­ter­Sys­tems has the re­sources to di­ag­nose core files on this plat­form.
B
Full support for this platform has recently lapsed. However, InterSystems still has the resources to diagnose core files on this platform. Some diagnosed problems may not be corrected with an ad hoc build.
C
Legacy support. InterSystems may still have limited resources to diagnose core files on this platform; however, it may no longer be possible to provide an ad hoc build to fix any defects found.
D
Nostalgia support. InterSystems does not maintain any resources to diagnose problems on these platforms; however, some limited capability survives. Cores on these platforms might be analysed. There is no chance that any defects found can be fixed.

Is­sue an ldd com­mand to list the needed li­braries:

# ldd install_directory/bin/image
       linux-vdso.so.1 =>  (0x00007fffd1320000)
       libdl.so.2 => /lib64/libdl.so.2 (0x00007f23e5002000)
       librt.so.1 => /lib64/librt.so.1 (0x00007f23e4dfa000)
       libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f23e4af0000)
       libm.so.6 => /lib64/libm.so.6 (0x00007f23e47ee000)
       libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f23e45d8000)
       libc.so.6 => /lib64/libc.so.6 (0x00007f23e4216000)
       /lib64/ld-linux-x86-64.so.2 (0x00007f23e521a000)
       libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f23e3ffa000)

The above contains sample output for RHEL 7. The output on all Unix systems is similar. install_directory refers to the directory in which Caché, Ensemble, HealthShare, or InterSystems IRIS data platform is installed. image is cache for all products prior to 2018, and irisdb for products since 2019.
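A hedged helper for gathering those libraries: parse the ldd output for absolute paths and copy each one into a directory you can include in the upload. Here /bin/sh stands in for the real image (install_directory/bin/cache or install_directory/bin/irisdb); the output format assumed is the glibc ldd format shown above.

```shell
# Copy the shared libraries a binary depends on into ./libs for upload.
BIN=/bin/sh            # stand-in; use install_directory/bin/cache or irisdb
mkdir -p libs
ldd "$BIN" | awk '$3 ~ /^\// {print $3}' | while read -r lib; do
    cp "$lib" libs/
done
ls libs                # the collected libraries
```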

If you are sending multiple files, it is best to place them in a compressed container file. In general, .ZIP is best; .tar.gz is also reasonable. For OpenVMS, create a compressed backup saveset with:

$ BACKUP *.* [-]saveset.BCK/SAVE/DATA=COMPRESS↩

It can be help­ful to in­clude a man­i­fest that ex­plains the files be­ing sent. Please pre­pare the man­i­fest as a plain text file.
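The bundling step might look like the following sketch. The payload files here are empty placeholders standing in for the real core, executable, and libraries; only the overall shape (one container archive plus a plain-text manifest) is the point.

```shell
# Build a placeholder payload (stands in for the real files).
mkdir -p bundle/libs
: > bundle/core.12345          # the core file
: > bundle/cache               # the executable that produced it
printf '%s\n' \
  'core.12345 - core from the failing process (pid 12345)' \
  'cache      - executable that produced the core' \
  'libs/      - shared libraries reported by ldd' > bundle/manifest.txt

# Pack everything into one container for upload.
tar -czf wrc_upload.tar.gz -C bundle .
tar -tzf wrc_upload.tar.gz     # check the contents before sending
```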

If you will be send­ing the data elec­tron­i­cally, please do not en­crypt the file, use an en­crypted trans­mis­sion method in­stead.

You can send core files to In­ter­Sys­tems by any of these meth­ods:

Method Security Max size
Direct upload to WRC Application You must open a WRC investigation before uploading files; upon uploading a file, you will have the option of marking the problem for elevated security. Once a problem is marked for elevated security, all access to files associated with your investigation is restricted to staff actually working on the investigation. In addition to a 300 Mio size limit for attachments, there is a 60-second time limit; therefore, your maximum upload is reduced if your effective bandwidth is less than about 42 Mbps. Secure or Elevated 300 Mio and 60 seconds
E-mail In general, e-mail should be avoided for all but simple problems that can be investigated without any customer data. Example: You just installed Caché on a new computer, and it fails upon startup with a small core file. That file is reasonable to send via e-mail. Unsecure 40 Mio
Our Kiteworks server You must request a link for uploading data for any given problem. These links expire in 30 days or less. This is the preferred method for uploading secure data. The absolute size limit is the amount of free space on our server. However, as this method is used by most of our customers, please advise if the files you intend to upload are greater than 4 Gio in size. Secure > 4 Gio
Our sftp server You must request a directory specific to the investigation. A directory will be created for the investigation. For elevated security problems we create a restricted-access machine (or virtual machine) and enable an automated process to move any uploaded files to that machine. The absolute size limit is the amount of free space on our server; please advise if the files you intend to upload are greater than 100 Gio in size. Elevated > 100 Gio
Your ftp/sftp server You must own and fully con­trol any server from which you re­quest we down­load data. In­ter­Sys­tems will not down­load data from any third-party server. Third-party servers are con­sid­ered a se­cu­rity risk. Up to you ?
SecurLink InterSystems can download files directly from any approved machine on your network through our SecurLink remote-control facility. There is no absolute size limit. However, if you are connected to the Internet via a V.90 modem, it would take us a week to download a 3 Gio core file. Secure and Elevated ?
Physical media You can mail physical media to your local InterSystems office. InterSystems can read the media and send the data to our Cambridge office, where most core analysis is performed. Most offices can deal with USB disks and ISO 9660 optical media. Our Cambridge office can deal with many tape formats. You should check with InterSystems first before sending any media. If the media is sent via registered (not certified) mail, the data can be considered secure (possibly elevated). Varies ?

It is important to remember that some of the files we want are binary files, while others are text. For some file-transfer methods (especially between unlike operating systems), it is important to specify whether the file is binary or text, to prevent the file from being corrupted.

Index

a

ABRT
AppArmor
abrtd
abrt-cli
apport

b

BlackListedPaths

c

CACCVIO-pid.LOG
CACHE.DMP
CERRSAVE-pid.LOG
CentOS
CORE_NOSHM
CrashReporter
CSESSION.DMP
cachefpid.dmp
pid.dmp
cachempid.dmp
cache.cpf
chcore
chdev
chkconfig
coreadm
coredumpctl
core_addshmem_read
core_addshmem_write
cstat

d

DumpStyle
default

e

/etc/environment
/etc/profile.d/something.sh
/etc/security/limits
/etc/security/limits.conf
/etc/sysctl.d

f

FULL

g

GRUB_CMDLINE_LINUX_DEFAULT
grub2
grub2-mkconfig

i

INTERMEDIATE
irisstat
iris.cpf

l

lsattr

m

MINIMAL
maxdsiz_64bit

n

NOCORE
NOFORK
NOFORKNOSHARE
NOHANDLER
NORMAL

o

OpenGPGCheck

p

ProcessUnpackaged
pid.dmp
/proc/self/coredump_filter
/proc/sys/kernel/core_pattern

r

RPM
rcapparmor

s

SYS$PROCDMP
SYS$PROTECTED_PROCDMP
$SYSTEM.Config.ModifyDumpStyle
sensitive information
smit
systemd

u

ulimit -c
/usr/sbin/kctune

y

yast2

z

$ZUTIL(40,1,48)
$ZUTIL(40,2,165)
$ZUTIL(150,"DebugException")