A while back I wrote a utility for displaying resources from binary files, and recently made some modifications to it. Visual Studio does provide similar functionality, but this tool is better for viewing resources. You can view as many resource files as you like (I haven't put a limit on it), and you can drag and drop folders or binaries into the application to get them loaded. It loads any binary as long as LoadLibrary succeeds. Quite useful for a quick view of the resources in a binary file; I'll add editing of resources in the next version of this tool. Hope this helps.
Here is a screenshot of how the tool looks…
How to use ResourceDigger
Easy to use. Just drag and drop a folder or a bunch of executables. Or…
To load an executable press Ctrl + L or Load Exe toolbar button
To load a folder press Ctrl + D, or the Scan Folder toolbar button. Select “Load Sub Directories” if you want to recurse into sub-directories.
What features are supported in ResourceDigger
Some of the features supported by the application…
Viewing animated GIFs, normal GIFs, PNGs, JPEGs, BMPs, HTML files, and manifest files.
Displays group icons and cursors with a detailed description of each… See screenshot…
Friendly display of accelerator table, a good way to know all the shortcuts provided by an application…
Friendly display of string table…
Displays resources in all available languages
Animated view of AVI files, with a toolbar to control the frames in the AVI file.
Version display… (there are a few issues, I’m working on them)
Clear view of registry resources…
Toolbar resource view…
Menu resource display…
Hex display of custom resources…
The tool hangs if you point it at a folder with a huge list of binaries.
It is not multithreaded, so just be patient until the resources finish loading.
Press ‘*’ on a particular node to expand all its child nodes.
These are compiler switches that tell the compiler where to dump the debugging information collected from a C/C++ source file during compilation. /Z7 tells the compiler to dump debugging information into the .obj file. /Zi tells the compiler to dump debugging information into an intermediate .pdb file.
What’s the Difference between /Z7 and /Zi?
The /Z7 option produces a .obj file containing full symbolic debugging information for every C/C++ source file compiled, for use with the linker. The symbolic debugging information includes the names and types of variables, as well as functions and line numbers. The .obj files grow in size because of the debugging information the compiler dumps into them, which bloats your disk. /Z7 is based on the old CodeView format. This option also places an additional burden on the linker, which has to parse every .obj file for debugging information. That information is then collated into one .pdb file, normally named after the executable, during the linking phase.
Advantages of using the /Z7 switch
The good thing here is that there is no contention to write to a single file (as you’ll see below).
Every .cpp file will have its own debugging information, which will eventually be collated by the linker.
Disadvantages of using the /Z7 switch
The downsides are the time taken to link, the size of the files on disk, and the old format.
The minimal rebuild feature (/Gm) will not work if /Z7 is enabled. You’ll get the following warning…
Command line warning D9007: ‘/Gm’ requires ‘/Zi or /ZI’; option ignored
The biggest disadvantage of /Z7 is that this format doesn’t support Edit and Continue, though that matters only if you use the feature at all.
What’s the effect of the /Z7 switch during debugging?
While debugging, the debugger tells us where it loaded the .pdb file for the binary being debugged. It loads the .pdb file generated by the linker, which is unaffected by either /Zi or /Z7. Please see the highlighted path of the .pdb file in the debugger. So yes, the linker generates the .pdb file, and that is the final .pdb file.
Please compare the sizes of the .obj files above with the output files generated when /Zi is enabled, shown further below.
With /Zi, the compiler writes debugging information to one centralized file: a program database named VCx0.pdb (or whatever you’ve configured it to be named), where x is the major version of Visual C++ in use.
Advantages of using /Zi
When you use this option, your .obj files will be smaller, because debugging information is stored in the .pdb file rather than in .obj files.
Easy on the linker. It has just one file to parse to figure out the debugging information for the binary being linked.
Duplicate debugging information doesn’t make it into the compiler-generated .pdb file, since the compiler is now working against one .pdb file. With multiple .obj files it doesn’t maintain a list of generated symbols, so it can’t detect duplicates.
Minimal Rebuild (/Gm) works only with /Zi or /ZI.
Advanced debugging features like Edit and Continue (/ZI) will work: you can make code changes while debugging; the changes are then built and debugging continues without stopping the session. Here is a sample effect on the debugger when a code change is made while debugging with Edit and Continue enabled…
——– Edit and Continue build started ——–
——————— Done ———————-
Disadvantages of using /Zi
High contention to write to the one .pdb file when parallel builds are running. Some machines will have several parallel builds configured.
With /Zi enabled you’ll see the following list of generated files. Take note of the sizes of the .obj files. Note that we now have vc120.idb and vc120.pdb files as well.
10/23/2014 06:00 PM 136 ConsoleApplication4.res
10/23/2014 06:00 PM 192 ConsoleApplication4.log
10/23/2014 06:00 PM 2,473 ConsoleApplication4.Build.CppClean.log
10/23/2014 06:00 PM 103,106 ConsoleApplication4.obj
10/23/2014 06:00 PM 933,490 stdafx.obj
10/23/2014 06:00 PM 1,551,360 vc120.idb
10/23/2014 06:00 PM 4,239,360 vc120.pdb <<<-- This is the pdb that the compiler generates, which contains debugging information from all the cpp files (path is generated using following pattern: $(IntDir)vc$(PlatformToolsetVersion).pdb). Now .obj files will not have debugging information. Compare their sizes with earlier output.
10/23/2014 06:00 PM 36,896,768 ConsoleApplication4.pch
What’s the effect of these options when debugging in Visual Studio or a crash dump?
As far as debugging is concerned, there is zero effect, as the linker will eventually generate the one final .pdb file, which is controlled by the linker switch /DEBUG.
The compiler only generates an ‘intermediate’ .pdb file containing the debugging information collected during compilation, which the linker will then eventually dump into a ‘final’ .pdb file. So all you should be worried about is the final .pdb file generated by the linker. This .pdb file is placed alongside the executable, and it is the one used when debugging the application or crash dumps.
So you might ask: what if we disable .pdb generation in the compiler settings? Well, then your code breakpoints will not be hit. The breakpoints will be disabled, since the linker couldn’t figure out symbols for your code because the compiler didn’t generate any!
In order to unify these different CRTs, we have split the CRT into three pieces:
VCRuntime (vcruntime140.dll): This DLL contains all of the runtime functionality required for things like process startup and exception handling, and functionality that is coupled to the compiler for one reason or another. We may need to make breaking changes to this library in the future.
AppCRT (appcrt140.dll): This DLL contains all of the functionality that is usable on all platforms. This includes the heap, the math library, the stdio and locale libraries, most of the string manipulation functions, the time library, and a handful of other functions. We will maintain backwards compatibility for this part of the CRT.
DesktopCRT (desktopcrt140.dll): This DLL contains all of the functionality that is usable only by desktop apps. Notably, this includes the functions for working with multibyte strings, the exec and spawn process management functions, and the direct-to-console I/O functions. We will maintain backwards compatibility for this part of the CRT.
The side by side implementation allows binaries to coexist even with identical names. Internally, the binaries are placed into different folders based on type, name, version, processorArchitecture and publicKeyToken; together these elements make up a unique folder/file name. All developers need to do is embed a manifest into their application, which I guess most of you would know. Mostly, side by side dependencies are specified via #pragma comment statements, or Visual Studio creates a manifest file which is then embedded into the binary using mt.exe. Visual Studio creates the manifest file in the intermediate output folder, following this naming convention…
How is a manifest embedded into an application?
If you build your application, you should see the following in the build output window…
1>Copyright (C) Microsoft Corporation. All rights reserved.
This file is embedded into the binary as a resource of type RT_MANIFEST, which is just an XML file. The OS application loader picks this file up from the application’s resource section and figures out the application’s dependencies from the manifest entries.
Viewing Manifest file embedded into an executable file
What does Side by Side Solve?
The intention was to solve DLL hell, but side by side itself went on to become a bigger hell, making bloggers like me blog about the issue. Side by side errors are hard to figure out, hence there is a dedicated tool to help diagnose them. The side by side concept is cool, but it got screwed up by the numerous ifs and buts that crept into the technology.
What does a Side by Side error look like?
Side by side errors are troublesome to troubleshoot. You run an MFC/CRT application on a customer machine and run into error dialogs similar to the one shown below…
Don’t be overawed by the error. It’s quite easy to troubleshoot. Hmm, well…
How to troubleshoot Side by Side errors using sxstrace?
As the error message suggests, let’s use sxstrace.exe. The usage of sxstrace is pretty easy to understand…
WinSxs Tracing Utility.
Usage: SxsTrace [Options]
Trace -logfile:FileName [-nostop]
Enabling tracing for sxs.
Tracing log is saved to FileName.
If -nostop is specified, will not prompt to stop tracing.
Parse -logfile:FileName -outfile:ParsedFile [-filter:AppName]
Translate the raw trace file into a human readable format and save the result to ParsedFile.
Use -filter option to filter the output.
Stoptrace
Stop the trace if it is not stopped before.
Example: SxsTrace Trace -logfile:SxsTrace.etl
SxsTrace Parse -logfile:SxsTrace.etl -outfile:SxsTrace.txt
Collecting sxstrace logs
The command usage message shows us two sample commands, and that’s exactly what we’re going to try. Please make sure you’re running an elevated command prompt…
Run the following command…
C:\>SxsTrace Trace -logfile:SxsTrace.etl
Tracing started. Trace will be saved to file SxsTrace.etl.
Press Enter to stop tracing...
So now you’re in tracing mode. Go ahead and run the application that threw the side by side error. Press Enter in the command prompt window once you’re done reproducing the error; this stops the side by side tracing. Once you press Enter, the ETL trace file is dumped into the current folder. The dumped trace file is not in a human-readable format…
Binary output from SxsTrace tool
Parsing sxstrace logs
To make it readable, we’ll need to parse this file using the sxstrace tool. Run the second sample command from the usage message to do that: SxsTrace Parse -logfile:SxsTrace.etl -outfile:SxsTrace.txt
So now we have a text file as output. Let’s open the file and find out what went wrong… The contents are as follows…
Parsed output from sxstrace
I’ve annotated the above screenshot for your convenience.
Sample location of a side by side assembly
So basically, side by side works based on the version of a DLL. All side by side binaries go into the winsxs folder located in C:\Windows. For example, on my machine msvcr90d.dll is located in the following folder…
Viewing a side by side assembly
If you noticed, the folder name is made up of the version number as well. DLLs belonging to different versions are put in unique folders, so they exist “side by side”, hence the name.
So the above error means the application couldn’t find msvcr90d.dll in the above location. The way I would solve this is to create a setup project in VC9 and install the merge modules onto the target machine. Note that these DLLs are debug binaries; otherwise you could have just installed the redistributables.
I’ve got a native console application which would like to interop into a piece of managed code written in C#. This is how the C# function “Sum” looks…
My solution explorer looks as follows…
CSharpModule is a C# library, while TestManagedCall is a native/unmanaged project. My requirement is as follows: call Class1.Sum from the TestManagedCall project.
Adding Reference for Interop
To do this, we’ll first need to add a reference to CSharpModule in the TestManagedCall project. Go to the project properties of TestManagedCall and add a reference to the CSharpModule project; see the screenshot below…
So the reference of CSharpModule is now added to TestManagedCall project.
Project changes for enabling Interop
The next step is to add a new C++ file to the TestManagedCall project. I’ll call the file CSharpModuleWrapper.cpp; this class will act as a wrapper for our managed library, CSharpModule. This is how my solution explorer will look now…
Right click on CSharpModuleWrapper.cpp in the solution explorer, select properties, and enable CLR for just this one file…
Click “Ok”. Now do a full rebuild. You should see the following errors pop up; fix them one by one, since adding CLR support results in these incompatibilities. (The errors appear one at a time, so fixing one will reveal the next. Keep fixing them until the build is clean.)
cl : Command line error D8016: ‘/ZI’ and ‘/clr’ command-line options are incompatible. Open the file’s (CSharpModuleWrapper.cpp) properties and go to “All Options” under the C/C++ node. This is a cool feature for quickly searching for an option in the properties dialog. Search for /ZI as given in the error message. CLR compilation doesn’t support /ZI; change it to /Zi.
cl : Command line error D8016: ‘/clr’ and ‘/Gm’ command-line options are incompatible. Again, open the file’s properties and go to “All Options” under the C/C++ node. Search for /Gm as given in the message and disable minimal rebuild by changing it to /Gm-.
cl : Command line error D8016: ‘/clr’ and ‘/EHs’ command-line options are incompatible. In the file’s properties, search for /EHs and switch to /EHa.
cl : Command line error D8016: ‘/clr’ and ‘/RTC1’ command-line options are incompatible. Change to Default as shown below…
Disable pre-compiled headers as shown below; search for /Yu under “C/C++ -> All Options”…
With these changes your code will compile. I get the following output…
Code Changes for Interop
Now add code to use CSharpModule’s namespace and add a function to the CSharpModuleWrapper class. Eventually, this is how my code looks with all the modifications…
CSharpModuleWrapper.h has only one change: I added a declaration for Call_Sum().
Don’t forget to call Call_Sum(). This is how the calling code looks like…
This is a reliable way to make calls into the managed world from the native world. Of course there are #pragmas (managed/unmanaged) that you can use, but I’m not so confident about using them. This is clean!
The sample I’ve shown is a very simple one; I’m sure you’ll have a variety of requirements, so let me know if I can help.
You might notice that comparisons with NaN give wrong results, at least for x64 builds. Take a look at the following piece of code. If you run this sample application, the comparison statement if(lfv==0.0) returns true and the MessageBox is displayed. (We had a customer who reported this behavior.)
If you put a breakpoint on the if statement, this is what the debugger shows as the value of lfv. lfv is definitely not zero.
So why is the comparison behaving weirdly? The reason is that you’ve disabled precise comparison of floating point values via the compiler switch /fp:fast. To fix this, change to /fp:precise. This can be done via the project properties dialog as well…
So you might ask: why does it behave well for x86 builds? The answer is that comparing NaNs with /fp:fast results in undefined behavior, so it is OK for the comparisons to behave differently on different targets with this option.
You should be aware of all the implications of using /fp:fast in VC++ and one of them is undefined behavior for NaN comparisons. This is the way VC++ specifies how /fp:fast behaves and is likely due to implementation constraints. Other compilers may produce the correct result for such compares but that is because each compiler has its own specification of what /fp:fast means.
Or, if you have too many places to fix, you can enable /fp:precise per .cpp file (per compilation unit). Right click on the .cpp file, select properties, navigate to the following property and change it to /fp:precise…
This should help solve your comparison failures with NaN values.
Use the I/O manipulator ‘unitbuf’ to turn off stream buffering and ‘nounitbuf’ to turn it back on. With ‘unitbuf’ set, the stream object is flushed after every insertion; otherwise the stream is not force-flushed. For example, endl triggers a flush when ‘nounitbuf’ is in effect.