Outliner Software Forum

Program with QDA Qualitative Data Analysis features? Coding/tagging blocks of text?



Posted by Carrot
Aug 4, 2012 at 02:48 PM


Dr Andus wrote:
>Carrot wrote:
>>…mentioned you code your document by adding Headings (up to 5 …). I think this will be the major problem - I need to use about 80 different codes to follow different themes in my work. Perhaps CT will not be suitable for this job, but I’ll purchase a copy and learn how to use it for my writing and other…
>
>There are several other features in CT that can be used for coding; the Headings are just one of them. I also use the Categories, Attributes, and Properties. The last two allow for some complex inline and search queries for building reports. It probably also depends on your particular qualitative research philosophy and methodology as to how you would want to use these for your purposes to arrive at a desired outcome.
>
>>For now, I will continue to use AtlasTI and experiment with HyperResearch QDA.
>
>I think it is a good idea to really learn and use one QDA extensively, and then once you’ve learnt the most useful features and got frustrated with the limitations, to try to remodel your process in CT. I used NVivo for 6 months daily and processed half of my data before I got fed up. Ironically, what got to me was that I ended up with 100 codes and it became difficult to say which were the important ones to focus on.
>
>Sorry I can’t do the detailed instructions for you right now for CT. I’ll try to do them when I get back home in Sept. Good luck!

Thanks for your reply!
Now I am even more curious about your method of using CT.
I’d much rather learn how to use an easily affordable program like CT and use it for my huge projects.
As for my “qualitative research philosophy”, I don’t really have one- just sort-of a grounded-theory
approach to code the data based on concepts that come out of the interview material. Once the data is coded,
then to see what codes overlap. I’m sure the process will teach me a great deal about themes in my data.

AtlasTI is powerful and well-designed and since I’ve already paid for it, it would behove me to learn it well.
Once I’ve done most of my work, later I will go back and figure out how to do it again in CT.
That might be a nice way of creating a website for my field research, and a way to link together the photos, video and the story
I want to tell.

The problem is now I find CT quite difficult to figure out.
Please do make that video when you have time in September. I think a lot of us would appreciate it greatly, and a lot of the people who use the CT forum would too.


Posted by Dr Andus
Aug 4, 2012 at 05:42 PM


Carrot wrote:
>As for my “qualitative research philosophy”, I don’t really have one - just sort of a grounded-theory approach to code the data based on concepts that come out of the interview material. Once the data is coded, then to see what codes overlap. I’m sure the process will teach me a great deal about themes in my data.

That’s pretty much my approach (a ground-up ethnographic approach).

I forgot to mention that the “Include” feature is another critical element of CT’s analytical capabilities. It allows you to extract bits of a document and include them in another document. So you can collect similarly themed (coded) sub-headings from several documents and construct a new page out of them. This is what I call abstraction by way of extraction.
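Outside CT, the extract-and-include idea can be sketched in a few lines. This hypothetical Python snippet (not CT syntax - the document names, codes, and section markers are all illustrative) gathers every section tagged with a given code from several documents and assembles them into a new "page":

```python
# Illustrative sketch of "abstraction by way of extraction" (not ConnectedText
# syntax): each document is a list of (code, text) sections; assemble() pulls
# every section carrying a given code into one new page.
docs = {
    "interview1": [("trust", "Trust grew slowly."), ("trade", "Barter was common.")],
    "interview2": [("trust", "Trust was tied to kinship.")],
}

def assemble(code: str) -> str:
    """Collect all sections tagged with `code`, labelled by source document."""
    parts = [f"{doc}: {text}"
             for doc, sections in docs.items()
             for c, text in sections if c == code]
    return "\n".join(parts)

page = assemble("trust")
```

The same pattern scales to any number of documents; the new page stays a pure view over the coded sources.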

As for getting started with CT, it’s already been discussed on another thread here. It’s the most difficult aspect of CT because it’s somewhat idiosyncratic; plus, it being a general-purpose wiki, it’s probably impossible to include instructions for all potential uses. It took me several years and several false starts before I realised what CT is and how I could use it. Interestingly though, after the initial breakthrough it was very quick to get up and running. Folks at the CT forum have been extremely helpful. One just needs to get stuck in, start doing things, work through problems, and learn it step by step.

I’m obviously a big fan, although I’ve only been using it since March. But it’s kind of saving my life because I was drowning under data and didn’t know how to pull everything together for my PhD project. I was expecting NVivo to do that for me but it wasn’t right.


Posted by Dr Andus
Aug 4, 2012 at 05:54 PM


Fredy wrote:
>ConnectedText, it’s a pleasure to see you’re considered here, lately,
>as the Swiss pocket knife for any text processing problem whatsoever; it’s
>heart-warming to see one developer flourish at least, even if it’s for the wrong
>reasons. As soon as you’re able to prepare my coffee and do my laundry, let me know,
>I’ll happily buy.

Fredy, I sense some irony in your words. Actually I only use CT for one thing: as the main tool for my PhD project, so I’m not claiming that it can solve every text processing problem in the world. Having said that, it seems to be a very versatile software. After all it is a wiki, so it depends on the user’s imagination as to what one makes of it and with it (kind of like the internet).


Posted by Fredy
Aug 7, 2012 at 03:17 PM


Doctorandus, glad you overcame your procrastination issue. ;-)

Alec, you’re right: at first sight there is no correlation between sw price and help / service / development issues, since there are notable exceptions, and other factors blur the picture. But then, I didn’t claim there was; I differentiated - not clearly enough, I must say - between “corporate” sw (NoteMap, MindManager: sw bought by (often large) corporations or by legal entities in order to do business work - sometimes they are not well served, as in the case of NoteMap, but they prefer buying third-rate sw from their usual source, here LexisNexis, instead of buying much better sw from “less reliable sources”) and sw bought by individuals (whether they can deduct it from their tax bill or not).

And I said, or wanted to say, that these have become two different markets with a rather tiny intersection - in fact, less and less so. And within that second (!) market, most developers have ceased to do serious work, and I claim that’s out of complete disenchantment with individual buyers’ buying habits. (MS Office Student & Home = about 90 dollars for the whole family: 3x Word, 3x Excel, 3x their outliner nobody wants… and without Outlook, e.g., let alone Access - compare with 1x (!) Word alone, ca. 1990, ca. 900 DM = ca. 1200 euro in today’s money.)

In such an environment (of which the apps market is the most obscene deformity), individual developers are no longer able to insist on even somewhat viable prices, and only a few succeed in replacing this “individuals” market by gaining access to the “corporation” market - many fail for lack of quality, and almost all are held back by the above-described phenomenon of corporations preferring lesser sw from more renowned developers (hence the invariable “We’ll give it to the developers” in one-man developers’ help forums).

So there are some niches left: CRM, macros / text expanders, Outlook add-ons, and several others - editors were such a niche decades ago.

Btw, your MyInfo example is a perfect one of SIMILI-development, in order to please people who are not too demanding = developers pretending only to listen to their customers (and it’s a good example of another phenomenon I see here: charging in dollars while being in Eastern Europe, while the corresponding U.S. “addresses” give up):

- didn’t have a second pane; has had one for some 18 months now, but it’s not editable and not even dockable, so either you buy a second monitor in order to use it or go bonkers
- didn’t have good print / export facilities but claimed to have them, within the help file only; the corresponding features couldn’t be checked since they were greyed out in the trial; after I bought, Petko said, oh yes, that was planned (= a rare example of a help file being ahead of actual development, when it’s normally the other way round), will be done; Petko 6 months later: I have no intention to do anything about these things; 2 years later: nothing has been done
- it had a good special feature: the possibility to display the first 3 lines of hit items within the (search) hit table; it was abolished without further notice; a “good” customer - one who had even written a routine for importing askSam files into MI - finally asked what was going on; Petko: will be re-introduced later; the feature has been missing for 18 months now; say you run your little business on MI, say you relied on that fine feature (customer “Smith” calls; you search for that customer; you get 15 hits; the info within those first 3 lines tells you which of those 15 hits to “open”); you try to revert to the (tremendously buggy) old version: I fear it’s even another format, so you can’t revert without losing data?
- the old version was tremendously buggy, as said; many intermediate versions, all “beta” with timeout; then a new PAID version - so, after many versions to do away with innumerable bugs, perhaps with your big help (and certainly with your amounts of patience), you either pay, or you revert to your old, bug-ridden paid version: NO free intermediate “minor” updates then, contrary to what you expected and were made to believe
- the paid new version is extremely bug-ridden; let’s hope this time people get free intermediate minor updates; I didn’t get any
- tags, yes; quick filter your items by just two tags, no
- attributes, yes; quick filter by more than one attribute at a time, no
- yes, there are some weird ways to do such filtering, with LOTS of key presses / mouse clicks, but it’s so devoid of practicability that using MI in a “productive” environment is technically possible but forces you to do a lot of unnecessary work again and again, and forces your customers to wait while you do it, on the phone, or in front of you; I call this unacceptable
- no Boolean search of course
- asked for a little pm system; have a look at what has been done
- I could bring many more examples, I’m just reviewing some nuisances from memory
- and yes, they introduced hoisting and amended clones creation = more or less, two years’ work - and for legal reasons, you need to call this “active development”.

Which is to say, our MI example is a perfect one to show WHY corporations refrain from buying sw from mom-and-pop affairs: the TCO will simply be much too high. Let alone our choice example, askSam: many brilliant ideas from ancient times, and so bug-ridden that anybody considering its use in any “productive” environment can’t sleep anymore for fear of total data corruption.

And yes, there are a few exceptions, as to most rules. Some individuals do brilliant little sw for their own use, and for the benefit of others. Those are rare examples. Back to NoteMap: one-pane outliners aren’t viable, neither in a legal environment nor anywhere else. No regrets, then. But askSam was finally bought / taken over by its former marketing chief… instead of by somebody who’d have been able to make it corporation-ready: we all would have greatly profited from such a move. Or more clearly: with sw, ask yourself whether they can rely upon an existing, large “corporate install base”. If not, let ’em die without your commitment (data, time, problems - the money being the lightest part in all this).


Posted by Fredy
Aug 7, 2012 at 04:18 PM


Editors are much undervalued. Most people don’t realize that one of the beauties of editors is that you have data you can work on in many different such tools, i.e. with editor 1 you do this, with editor 2 you do that - and you can automate / script almost everything, within those editors as well as in the orchestration of such a workflow.

To whom it may concern, some hints:

- don’t be afraid of subroutines having to check for things or process 10k lines, one after the other; a good editor will take only seconds for millions of such computations…

- incl. checking checkboxes or such, i.e. you do (complete or partial) computations on / in a five-digit number of “lines” = records, all writing intermediate results into up-front “fields” of these lines, e.g. IF in line x … and … and … (= could be 50 or more checks, comparisons, whatever) THEN fill the “field” $e at the beginning of that line with “1”, ELSE fill it with (or leave it at) “0”, so you would have a line starting with $e1;other field;other field, etc., or with $e0, etc.
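As a minimal sketch of this flag-field idea (the “;” separator, the “$e” marker, and the sample records are my assumptions, not a prescribed format), in Python:

```python
# Each line is a record whose fields are ";"-separated. We evaluate conditions
# on the record and write the result into a flag field ($e1 / $e0) at the
# front of the line, as described in the text.

def flag_line(line: str) -> str:
    """Prefix the line with $e1 if it matches our conditions, else $e0."""
    # Strip an existing marker first, so the routine is safely re-runnable.
    if line.startswith(("$e0;", "$e1;")):
        line = line.split(";", 1)[1]
    fields = line.split(";")
    # Example conditions; a real routine might chain 50 such checks.
    hit = "interview" in line and len(fields) >= 3
    return ("$e1;" if hit else "$e0;") + line

records = [
    "mw01;para001;interview about markets",
    "mw01;para002;field notes",
]
flagged = [flag_line(r) for r in records]
```

Running the routine over 10k lines is a single list comprehension; the flag then sits in a fixed position for later sorting and filtering.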

- the same then works for sorting / filtering, i.e. even if you have got “fields” within the “text” of these lines = records, you can SORT by these criteria: just check, and if yes, fill out a special “field” in front of your line; as said, the subroutine doing that special filling (or a reset to zero there) would take perhaps 2 sec. for 10k lines, or even less, on a modern pc

- you can do this with SEVERAL such “check fields” (or even with “de-doubled” data “fields”: Whenever a data “field” within a line becomes relevant for sorting, just de-double it in front of the line), and you know you can sort by several criteria, a, b, c: sort by c, then by b, then by a, and you’ll get a body sorted by a then b, then c; combine this with filtering and you’ll get only the relevant records (to check visually, to export, to print…), and in triply sorted order - remember for tenfold sorting, you can quickly write a routine to do it, then just fill in the criteria each time
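The multi-pass trick above depends on the sorts being *stable*: sorting by the least significant key first (c), then b, then a, leaves ties in each later pass in the order the earlier pass produced. Python’s built-in sort is stable, so the editor technique can be mirrored directly (the tuples here are stand-ins for the “field” records):

```python
# Three successive stable sorts, last key first, yield an a-then-b-then-c
# ordering - exactly the "sort by c, then by b, then by a" recipe above.
rows = [("b", 2, "x"), ("a", 1, "y"), ("a", 2, "x"), ("b", 1, "y")]

rows.sort(key=lambda r: r[2])  # pass 1: least significant key, c
rows.sort(key=lambda r: r[1])  # pass 2: key b
rows.sort(key=lambda r: r[0])  # pass 3: most significant key, a
# rows is now ordered by a, then b, then c
```

For tenfold sorting, the same loop body runs once per key, from the least significant key to the most significant one.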

- such de-doubling of data is one means to avoid having to cope with leading zeros, for example; if you’ve got 1- to 3-digit numbers, you’ll need subroutines putting zeros in front of 1- and 2-digit numbers, of course, if you want to sort by “fields” containing such numbers, or if you want to “block sort”, i.e. to sort by field contents AFTER such number “fields” - you avoid this by de-doubling such “fields” containing “regular” numbers (= without leading zeros) into special “fields” where you put leading zeros where needed, leaving the original fields intact; OR you could use de-doubled number fields without leading zeros, but with the original numbers PLUS, say, 1,000: if you have 1- to 3-digit numbers (1 to 999), add 1000 within the special field, so you’ll invariably get 4-digit numbers; the same goes for “original” data, on the condition that you never forget which range the “real” original data was in, i.e. subtraction is needed before publication…
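Both variants - zero-padding into a duplicate sort field, and adding a fixed offset - can be sketched like this (the function names are my own; the 1000 offset follows the example in the text):

```python
# Two ways to make 1- to 3-digit numbers sort correctly as text. Both leave
# the original field untouched and write into a duplicate "sort field".

def pad_field(n: int) -> str:
    """Zero-pad to a fixed width: 7 -> "007"."""
    return f"{n:03d}"

def offset_field(n: int) -> str:
    """Add a fixed offset so all values have 4 digits: 7 -> "1007".
    Remember to subtract 1000 again before publication."""
    return str(n + 1000)

values = [42, 7, 999]
padded  = sorted(pad_field(v) for v in values)     # textual sort now correct
offset  = sorted(offset_field(v) for v in values)  # likewise
```

Without either trick, a plain textual sort would put "42" after "007"-style values incorrectly ("42" > "100" as text); with them, text order and numeric order coincide.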

- I spoke of statistical analysis when most editors have just rather basic number crunching. No problem - just remember to do everything as simply as possible WITHIN the records, i.e. put intermediate results there, and then run Excel or special sw on these (but don’t try to do text processing within Excel, and XL Notes is a dead end - willing to put 10k text paragraphs into 10k XL Notes Word files within Excel cells? See what I mean when I so often declare things “amateurish”?)

- It goes without saying that lots of text analysis can be half-automated, i.e. if x near y (and perhaps even z) in the line, then… and then you’ll have that filtering in order to check visually (and undo unwanted field settings made by such rules): it’s far better to have 200 such “fields” set automatically (or their content set to 1 from 0 or whatever) and then have to check (in a filter table on the screen) and switch off 10 such “fields” / settings manually, than to code 190 of such records manually, out of perhaps 2000, in a pro tool like CT or whatever
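A hedged sketch of such a proximity rule (the `near` helper, the “$near” code name, the word window, and the sample lines are all hypothetical):

```python
# Auto-set a code field by an "x near y" rule, then list the hits for manual
# review - the step where you would un-set the few false positives by hand.

def near(line: str, x: str, y: str, window: int = 5) -> bool:
    """True if words x and y occur within `window` words of each other."""
    words = line.split()
    xs = [i for i, w in enumerate(words) if w == x]
    ys = [i for i, w in enumerate(words) if w == y]
    return any(abs(i - j) <= window for i in xs for j in ys)

records = [
    "market day the women sell grain early",
    "the grain harvest was poor this market season",
    "children walk to school",
]
coded = [("$near1;" if near(r, "market", "grain") else "$near0;") + r
         for r in records]
hits = [r for r in coded if r.startswith("$near1;")]
```

The rule does the bulk coding; the `hits` list is what you would eyeball in the filter view before trusting the field.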

- Even your Excel (or whatever your statistic tool is) results can then be re-introduced into your editor file, be it in front (append), or be it by replacing the values of the corresponding (let’s say first dozen or so of your) “fields” there, here again, a combination of “block processing” and the Excel export format can do wonders.

- For 80 or 180 codes, an editor is brilliant, especially since whenever you see that you cannot keep your code $e but need codes $ea, $et, etc., there is “global replace”. As said - and that’s the thing non-editor users first must become aware of - 99 % of all processing will be done WITHIN single lines, but for 10k lines, 10k times in a row: that’s what editors are for when doing data processing.

- Of course, for reading bits you toggle to “word wrap” (buy a large screen anyway); the de-doubling of “fields”, btw, is a good way to “read” things = check visually for things on the screen, even for “field” contents where the original “fields” are further away down the length of the line; you could even have some 3 or 4 “read fields” near the front, into which at any given time you copy the contents of the various fields you want to check.

- In my first post on this I meant: keep your original data preserved in original form, i.e. if you have, let’s say, 50 conversations with 50 Mali women (or with whomever), make special lines ##mw01 or whatever, and then a line for every paragraph in that conversation, in the form #mw01£001$code1$code2… up to #mw01£999$code1… for (in this example) up to 1,000 lines per conversation - i.e. have a “natural sort” of your original material you can revert to at any time; and of course such an editor can easily number those lines / paragraphs of each conversation. So the very first such “fields” would be invariable, but as said, if you ensure these start “fields” are of equal length, char-wise, you can easily sort on the following fields
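The numbering scheme above can be sketched as follows (the `number_conversation` helper is hypothetical; the “#mw01£001$” layout follows the example in the text):

```python
# One line per paragraph, prefixed with a fixed-length conversation id and a
# zero-padded paragraph counter, so the "natural sort" of the original
# material can always be restored by sorting on the line prefix.

def number_conversation(conv_id: str, paragraphs: list[str]) -> list[str]:
    """Turn a conversation's paragraphs into prefixed record lines."""
    return [f"#{conv_id}£{i:03d}${text}"
            for i, text in enumerate(paragraphs, start=1)]

lines = number_conversation("mw01", ["first paragraph", "second paragraph"])
# -> ["#mw01£001$first paragraph", "#mw01£002$second paragraph"]
```

Because the prefix is fixed-width, a plain textual sort of all lines from all conversations reproduces the original order, and any codes appended after the `$` never disturb it.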

- Must I really add that you could use another special char whenever you must / want to combine several paragraphs into one line, which will then allow you to dissect them automatically afterwards? The same goes for text formatting (within the original, to be preserved, or for better presentation afterwards) - people doing stuff with CT don’t need the slightest hint from me on how to do this; within an editor it’s just the same.

- Etc., etc. - there are lots more possible hints. As soon as you’ve got the strength of editors, much commercial sw will become not only obsolete for you, but you will discard it as totally unacceptable: an editor will do whatever you want it to do for you; commercial sw, most of the time, is “dumb crap” in comparison, or costs some Benjamins I’d be willing to spend… but then I’d fear that I’d quickly hit that wall beyond which the developers didn’t do their work as I want their product to be - whilst with an editor, I add a line or two to the script, and I’m done.

- Re KEdit: For a start with filtering, this program is pure gold; it’s just for more elaborate tasks I prefer better stuff.

- Re askSam: The beauty of that prog was that it was a “real application, ready for use”, and it had lots of editor-like features coming with such a “general public program”. Whilst an editor is an editor: beauty on the screen there is not.

But, Carrot, don’t you think that for purely esthetical reasons here, you, like Daly and some others, should avoid endless “citations” when they ain’t of any use?


