Vicenç Torra
e-mail:
tot (ensaimada) natana (punt) cat

As a blog:
On papers, venues and publication strategies (14/01/2021)

Selecting a good venue for a paper is very important. There are different factors to take into account: the reputation of the conference or journal, the research results and paper topic, and timeliness (deadlines, time to decision, the time you need to have the paper "accepted"). What is preferable may also differ depending on whether you are at the beginning of your PhD or almost finishing (in the latter case, it also depends on what you are planning to do next: research/industry). In any case, it is very important to avoid predatory journals and predatory conferences. Some thoughts follow.

Researchers in computer science, in general, and in AI/ML, in particular, tend to publish in conferences. There are top tier conferences, distinguished by their reputation. Conference reputation comes from "historical" tradition, the program committees, the reviewing process, etc. Of course, not everyone agrees on the "rating" of some particular conferences, but there is general agreement on most of them.

Top tier conferences usually have a low, or extremely low, acceptance rate: less than 30%. E.g., ICML 2018 had a 25% acceptance rate, and IJCAI-PRICAI 2020 had a 12.6% acceptance rate. Top tier conferences can give prestige to your CV. When a paper is rejected you may get good feedback, useful for revising and improving it, and then trying another venue.

Then, of course, there are other conferences and workshops that can be of interest. Smaller and specialized conferences and workshops can provide you with good feedback, let you have interesting discussions with researchers in your own field, and help you build your social network in the area. I really value small conferences and workshops.

You can find on the Internet several web pages listing CS conferences, some covering only top tier CS conferences and others including rankings of conferences. See e.g. https://webdocs.cs.ualberta.ca/~zaiane/htmldocs/ConfRanking.html

In Australia, the CORE (Computing Research and Education) conference portal classifies computer science conferences into the following categories: A*, A, B, C, and "other". Among all conferences, 7% are classified as A*. They are the "top tier" ones in their classification. This is a well known ranking of conferences. Links here:

Below is my particular list of top tier conferences somewhat related to our research, classified by topic. I also list some other non-top-tier conferences and workshops related to our research. This list is highly personal.

Within computer science, the major publishers of conference proceedings are Springer, ACM, and IEEE. In addition to these publishers, CEUR-WS (http://ceur-ws.org/) publishes free open-access proceedings. With respect to conferences, the reputation of the conference is more relevant than the publisher of the proceedings. Having said that, within Springer, I would prefer proceedings in the LNCS (Lecture Notes in Computer Science) or LNAI (Lecture Notes in Artificial Intelligence) series over other series. One reason is that most people consider LNCS/LNAI an acceptable publication even if they don't know the conference.

In other areas, including mathematics, statistics, and some fields in engineering, researchers tend to publish in journals. Elsevier, Springer, and IEEE are probably the main publishers of papers in our area. Not all journals are regarded as having the same quality or relevance in our field. Some consider the impact factor (JCR) the main indicator of quality. I think this is wrong. In all areas there are journals with a good reputation, and these journals are not necessarily the ones with the largest impact factors. As in statistics, correlation does not mean causation. It is not the high impact that makes the journal good; it is a good journal that can have a high impact.
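For reference, the JCR impact factor is essentially a two-year citation average; a rough sketch of the standard definition, with the year 2020 used only as an illustrative label:

  \mathrm{IF}_{2020} = \frac{\text{citations received in 2020 by items published in 2018--2019}}{\text{number of citable items published in 2018--2019}}

That is, it measures short-term citations, not, by itself, the quality of the papers or of the reviewing process.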

Note also that when an index is used as a measure of quality, maximizing this indicator becomes a kind of universal "goal". This, of course, has side effects.

In addition to these considerations, others also apply. Countries have particular research policies, and researchers are influenced by them. For example, the research agencies of some countries value journal publications, so CS researchers in those countries need to publish in journals to increase their funding possibilities.

When a CV is considered, not only publication venues and impact factors are taken into account. The number of citations of the papers is also often considered. This, of course, assumes that the research done is relevant for the position (which is not always the case, of course) and of good quality.

CONFERENCES (and CORE ranking 2020 -- apologies in case of error)
  • General AI conferences
    • IJCAI A*, ECAI A, AAAI A*
  • Machine learning conferences
    • ICML A*, KDD A*, PAKDD A, ECML (ECML-PKDD) A, ICDM A*, SDM A
  • Privacy and Security
    • ESORICS A, SP: IEEE Symp on Security and Privacy A*, Euro SP: IEEE European Symp on Security and Privacy (only a few years old), TrustCom (new, just old name) A
  • Agents
    • AAMAS A*
  • Uncertainty
    • UAI A*, IEEE Fuzzy A, IPMU C, FLINS B
  • Others
    • ICDE (Data Engineering) A*, NeurIPS (neural nets) A*, VLDB (Databases) A*
  • Other related conferences
    • PSD (LNCS) C, DPM (LNCS) -, PETS B, PST (IEEE) C, SECRYPT B
  • We organize
    • MDAI B

Other rankings: Norwegian list (also used in Sweden), Finnish list (also used in Sweden), Italian list

Vicenç Torra


Last modified: 22:54; January 14, 2021.