Maybe it is the worry that Moore's law has saturated; maybe chips used to get smarter at a better rate than they do now.
The fashionable and much-demanded work at many places these days, as I see it, is scaling up business-critical software applications. Even resource-rich legacy mainframe shops, which rarely used to worry about performance, appear to be shifting focus to this front.
Below is a list of the usual tactics for attacking mainframe performance problems. I am going to keep this list (in no particular order) updated whenever I learn about a new entrant.
JCL Techniques
1) BUFNO, which signifies the number of buffers used for reads, is installation specific and usually defaults to one or two. Override it to a higher value like 20 to 50 [DCB=BUFNO=50]. Do this with care, because over-allocating buffers is as bad as not allocating them.
2) If feasible, migrate away from tape. At many installations this doesn't improve things considerably, given the advancements in tape storage management, but I think the dimension is worth verifying.
3) Shift the REGION and TIME parameters from the step level to the job level where possible.
4) Try allocating SORTWK files explicitly when dealing with sorts.
5) Give a reasonable amount of secondary space to the data sets.
6) Scratch data sets with IDCAMS rather than IEFBR14.
7) Replace IEBGENER with DFSORT/SYNCSORT for copies. IEBGENER is simply an old-timer, and the sort utilities are optimized to the maximum level possible.
8) Code BLKSIZE=0 and let the system determine the optimum block size.
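Several of the JCL tips above can be sketched in a single job. The step names and data set names here are hypothetical, made up purely for illustration:

```jcl
//*------------------------------------------------------------------
//* Copy with DFSORT instead of IEBGENER, with a BUFNO override on
//* the input, and BLKSIZE=0 plus reasonable secondary space on the
//* output. Data set names are illustrative only.
//*------------------------------------------------------------------
//COPYSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.INPUT.FILE,DISP=SHR,DCB=BUFNO=50
//SORTOUT  DD DSN=MY.OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE),BLKSIZE=0
//SYSIN    DD *
  OPTION COPY
/*
//*------------------------------------------------------------------
//* Scratch a data set with IDCAMS rather than IEFBR14.
//*------------------------------------------------------------------
//SCRATCH  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE MY.OLD.FILE NONVSAM
/*
```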
Cobol Techniques
1) Check for too many opens and closes of files. Open once, and close as few times as possible.
2) Initializing copybooks - this is a killer. It is safe not to INITIALIZE entire copybooks, because doing so generates too many assembler statements. Try moving values to the specific fields instead of initializing.
3) Compiler options - SSRANGE consumes a lot of CPU. If possible, override it and use NOSSRANGE.
4) Whenever possible, exploit the utilities like DFSORT/Easytrieve/SAS for sorting/joining/reformatting/reporting instead of homegrown Cobol elements. They are tested and proven for performance.
5) Cobol internal sorts? An inefficient sorting method - push the sort out to a utility step where you can.
6) Binary search - SEARCH ALL is preferred over SEARCH (the table must be kept sorted on the key).
7) Code BLOCK CONTAINS 0 RECORDS on the FD.
8) Use NODYNAM where static calls are acceptable.
9) Use indexes over subscripts.
10) OCCURS DEPENDING ON is another thing that can be eliminated to improve things a bit.
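The SEARCH ALL and indexing points above can be sketched as follows. The table, field, and working-storage names are hypothetical:

```cobol
      *----------------------------------------------------------------
      * Hypothetical example: a table with an ASCENDING KEY and an
      * index, searched with SEARCH ALL (binary search) rather than
      * a serial SEARCH. All names are made up for illustration.
      *----------------------------------------------------------------
       01  RATE-TABLE.
           05  RATE-ENTRY OCCURS 500 TIMES
               ASCENDING KEY IS RT-CODE
               INDEXED BY RT-IDX.
               10  RT-CODE     PIC X(04).
               10  RT-VALUE    PIC 9(05)V99.
      *----------------------------------------------------------------
      * SEARCH ALL works only if the table is kept sorted on RT-CODE.
      * References use the index RT-IDX, not a subscript.
      *----------------------------------------------------------------
           SEARCH ALL RATE-ENTRY
               AT END
                   MOVE ZEROES TO WS-RATE
               WHEN RT-CODE (RT-IDX) = WS-WANTED-CODE
                   MOVE RT-VALUE (RT-IDX) TO WS-RATE
           END-SEARCH
```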
DB2 Techniques
1) A table scan means the indexes are not being used; and with a composite index, not referencing all the indexed columns (particularly the leading ones) can also prevent the index from being used. So supply dummy or forced predicates on all the indexed columns - make the query use the index.
2) Many a time the SQL query in an element wouldn't itself consume many resources, but executing that query too many times is a killer for performance and creates hot spots. Try caching the results in a Cobol table.
3) Avoid session tables - go for declared temporary tables instead.
4) Try to reduce the level of nesting in queries.
5) If possible, try to take the EXISTS clause out of the query into a separate query.
6) Use multi-row FETCH.
7) Use OPTIMIZE FOR n ROWS or FETCH FIRST n ROWS ONLY.
8) Of course - introduce indexes on frequently used columns.
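The fetch-related tips above look roughly like this in embedded SQL. The table, column, cursor, and host-variable names are all hypothetical:

```sql
-- Hypothetical names, for illustration only.
-- Tell DB2 you only need a screenful of rows:
SELECT ORDER_NO, ORDER_AMT
  FROM ORDERS
 WHERE CUST_NO = :WS-CUST-NO
 ORDER BY ORDER_NO
 FETCH FIRST 20 ROWS ONLY
 OPTIMIZE FOR 20 ROWS;

-- Multi-row fetch: pull 50 rows per FETCH into host arrays
-- instead of paying the cross-memory cost one row at a time.
FETCH NEXT ROWSET FROM ORDER-CSR
  FOR 50 ROWS
  INTO :WS-ORDER-NO-ARR, :WS-ORDER-AMT-ARR;
```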
Indeed, last but not least - it is good to verify the job scheduling timings and strategies, so that when a job goes active it gets everything it wants and doesn't wait for something midway through its execution.
Of course, neither this list nor any list on this topic could be finite enough to cover all of the options - there is always one more interesting way.