Meeting Minutes for 06/28/2023

Marmot Union Cataloging Committee
Wednesday, June 28, 2023
Minutes
 

Announcements

  • MLN1 Fixed display of call number subfield |e on Pika
    • This is the field where you would put your volume if your MARC record describes a single volume
    • Previously this was not displaying in Pika
    • This is |k on MLN2 and has been working there
  • Core Subject Analysis Committee meeting
    • The Library of Congress is experimenting with removing LCSH form subdivisions, that is $v
      • They would use 655 genre headings instead
      • The Library of Congress reminds everyone that they make LCSH, they pay for it, and they do all the work, so they are going to do what they want for their own reasons, and there's nothing anyone can do about it.
      • The experiment will run for 6 months before they make a decision.
      • It is not a pilot; it is an experiment.
    • The World Jewish Council has complained to ALA about how Holocaust denial books are shelved with serious books about the Holocaust. ALA referred this to SAC, which is discussing a separate subject heading. Of course ALA doesn't do anything about call numbers.
  • New Marmot 001 duplicates reports.
    • These are the Tableau reports Brandon created. Now there are separate reports for bib level holds and item level holds.
    • The new reports are at the same URL as always
  • Special Projects Team is working on the Marmot cataloging training plan.

Discussion Topics

  • MLN2 Order record loading process
    • Currently Pika requires an RDATE and CDATE to acknowledge that an order is received
    • The CDATE can't be automated, so this setup prevents effective use of EDI invoicing
    • We use CDATE now so there's no gap between receiving and creating the item; during such a gap the bib would stop showing up in Pika.
    • Can we use a dummy item to fill the gap?
    • After a lot of discussion we decide not to make any changes.
    • MLN2 will continue to require RDATE and CDATE in Pika.
  • MLN2 BCODE1 causing problems for API record loading
    • BCODE1 is currently functioning as audience code
    • The API assumes it is bib level, so when vendors send records with audience codes the API won't load them
    • We can't do API record loading without changing BCODE1 to the default of bib level
    • BTCat uses API loading, and Longmont wants to try BTCat
    • Lloyd is proposing to change BCODE1 to its standard function of recording bib level information
    • The project would take these steps:
      • Copy audience information from BCODE1 to the 008 field audience
      • Change the record loaders so they will copy bib level from the leader into the BCODE1 field in the future
      • Change the labels and codes of BCODE1 to reflect bib level instead of audience
      • Copy bib level data from the leaders of existing records into BCODE1
    • This will not change the audience code in the 008 field which is what Pika looks at.
    • It is pointed out that staff still use Classic Catalog for selection.
      • You will still be able to use location limiters to find material in children's and young adult collections in Classic Catalog.
    • In BCODE1 there is code F LITERACY. This does not correspond to any codes in the standard MARC audience code. What is this code?
      • Years ago Boulder had an adult literacy collection. This probably corresponded to that collection.
      • There are only 129 things with the F code.
      • People think it is probably adult literacy.
      • In 008 audience code F is specialized audience, and adult literacy is a specialized audience so it would make sense to keep these as F in the 008 as well.
    • Another question is the G ADULT code in BCODE1. In 008 G is general, not adult. In 008 E is adult. Should this information be copied to G or E?
      • Adriana says the G ADULT in BCODE1 corresponds to E ADULT in 008, not G GENERAL.
      • So we will copy G from BCODE1 to E ADULT in 008.
    • Nobody objects to the project as long as audience information is retained in 008 which is where Pika looks for it.
    • Lloyd will start on the project so Longmont can use BTCat soon.
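The remapping decided above can be sketched in code. This is a hypothetical illustration only: Sierra does not expose fixed fields as plain dicts, the field names here (bcode1, audn_008, leader) are made up for the sketch, and only the F and G mappings were actually confirmed in the meeting.

```python
# Hypothetical sketch of the BCODE1 audience-to-008 migration discussed above.
# Records are plain dicts; Sierra's real fixed-field handling differs.

# Per the discussion: G ADULT in BCODE1 corresponds to E ADULT in 008,
# not G GENERAL, and F LITERACY stays f (specialized) in the 008.
# Other codes (shown here as identity mappings) are assumed to carry over.
BCODE1_TO_008_AUDIENCE = {
    "j": "j",  # juvenile (assumed pass-through)
    "d": "d",  # adolescent (assumed pass-through)
    "f": "f",  # F LITERACY -> f specialized, per the discussion
    "g": "e",  # G ADULT -> e adult, per Adriana
}

def migrate_record(record):
    """Apply the two copy steps of the project to one bib record (dict sketch)."""
    # Step 1: copy audience information from BCODE1 into the 008 audience byte.
    audience = BCODE1_TO_008_AUDIENCE.get(record["bcode1"], record["bcode1"])
    record["audn_008"] = audience
    # Final step: copy bib level from the leader (position 07) into BCODE1.
    record["bcode1"] = record["leader"][7]
    return record

rec = {"bcode1": "g", "leader": "00000cam a2200000 a 4500", "audn_008": " "}
migrated = migrate_record(rec)
# audn_008 becomes "e" (adult); bcode1 becomes "m" (monograph, from leader/07)
```

The sketch preserves the constraint everyone agreed on: the audience information survives in the 008, which is where Pika looks for it.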
  • Boulder's copy of "If You Give a Mouse a Cookie" was repeatedly put on a record for the braille version. Has anyone else had this happen? Can we figure out why it is happening?
    • We look at the two records in Sierra.
    • Does anyone actually have this title in the braille/text version?
      • Esther says they have a braille collection, but not this title at Broomfield.
    • Question: Shouldn't the braille version have a different ISBN?
    • Publishers are not good about using ISBNs correctly
    • Someone suggests that it could be a braille overlay, so it could actually be the original book that has been altered with a braille overlay
      • That would be a messy cataloging problem
    • Question: If the ISBN is in subfield z, will Pika still group them together?
      • Pika grouping is based on Title and Author, not ISBN. So they will be in the same group, but Pika does treat braille as a separate format, so they will be in a separate format facet.
    • We can't figure out if anyone actually even has this braille version. We will have to follow up in email.
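The grouping answer above can be illustrated with a toy sketch. This is not Pika's actual algorithm (which is more involved); it just shows why a print bib and a braille bib land in the same work group when the grouping key comes from title and author rather than ISBN.

```python
import re

def grouping_key(title, author):
    """Normalize title + author into a simple work-level grouping key (illustrative)."""
    def norm(s):
        s = s.lower()
        s = re.sub(r"[^a-z0-9 ]", "", s)      # drop punctuation
        return re.sub(r"\s+", " ", s).strip()  # collapse whitespace
    return norm(title) + "|" + norm(author)

# Different ISBNs and trailing punctuation, same normalized key:
print_bib = grouping_key("If You Give a Mouse a Cookie", "Numeroff, Laura")
braille_bib = grouping_key("If you give a mouse a cookie.", "Numeroff, Laura")
# Same key -> same group; the braille copy separates only in the format facet.
```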
  • Change to MLN1 Cataloging Standards for 690 |5 for local genre headings
    • This is a minor change to MLN1 cataloging standards document
    • The proposed version is here: MLN1 Cataloging Standards 2023.06
    • Jamie points out that VuFind uses this code already.
    • We don't think Pika makes use of this code, but maybe it could in the future.
    • There are no objections to this change to cataloging standards
  • Follow up from discussion at council
    • If Marmot were to spend money on data clean up in Sierra what should we do?
      • Pay for development to improve the Marqui MarcEdit Plugin
      • Hire someone to work directly in Sierra on data clean up
      • Pay to get everyone on OCLC and OCLC records from vendors
    • This might require more than one meeting to discuss. For one thing we are running out of time today.
    • Lisa says maybe all of these are looking at the wrong thing. She would like to see whether something can be done in the load profiles to stop the problems with overlaying: OCLC bibs are not overlaying order record bibs. They insert a second record, so manual deduping has to be done, or holds stay on records with no items.
      • It is possible for a loader to match on the .o number instead of the ISBN. That way a record would overlay the bib with the order attached instead of an ISBN match.
      • The problem with this plan is that it could change the OCLC number, if the order is attached to an OCLC record and the new record is also from OCLC.
      • We could add the overlay priority function to the load profiles so that records would overlay based on either order numbers or the Marqui loader. That would mean that records would only get overlaid by higher level records. Now there are a lot of loaders that only match and attach. The overlay priority function would allow them to overlay.
      • Overlay priority could still allow an OCLC number to be changed by an overlay. That would mess up other people's OCLC holdings.
    • Garfield says they are matching on .o numbers.
    • Another option for Marmot to spend money would be to pay to get everyone on the acquisitions module. So everyone could have order records, and use those order records as match points for bib overlays.
    • Mary Miller points out that they often can't get OCLC numbers right away. They have to load brief records so patrons can place holds even though they won't have OCLC numbers for weeks.
    • This discussion will be continued next month.
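The match-point precedence discussed above can be sketched as follows. This is a hypothetical illustration, not Sierra load-table syntax: field names are invented, and the point is simply to show the .o-number match taking priority over the ISBN match, along with the OCLC-number caveat Lisa raised.

```python
# Hypothetical sketch of load-profile matching: prefer the .o (order) number
# so the incoming record overlays the bib the order is attached to, then fall
# back to ISBN; flag any overlay that would replace one OCLC number with another.

def oclc_would_change(bib, incoming):
    """True if overlaying bib with incoming would swap its OCLC number."""
    return bool(bib.get("oclc_num") and incoming.get("oclc_num")
                and bib["oclc_num"] != incoming["oclc_num"])

def find_match(incoming, catalog):
    """Return (matched bib or None, flag that the overlay would change the OCLC number)."""
    # Pass 1: order-number match takes priority.
    for bib in catalog:
        if incoming.get("order_num") and incoming["order_num"] in bib.get("order_nums", []):
            return bib, oclc_would_change(bib, incoming)
    # Pass 2: fall back to ISBN.
    for bib in catalog:
        if incoming.get("isbn") and incoming["isbn"] in bib.get("isbns", []):
            return bib, oclc_would_change(bib, incoming)
    return None, False  # no match: the record would insert as a new bib

catalog = [{"order_nums": ["o1234567"], "isbns": ["9780064434096"], "oclc_num": "ocm123"}]
incoming = {"order_num": "o1234567", "isbn": "9780064434099", "oclc_num": "ocm456"}
bib, warn = find_match(incoming, catalog)
# Matches on the order number despite the ISBN mismatch, and warns that the
# overlay would change the OCLC number, which would mess up OCLC holdings.
```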

FOLIO Updates

  • Duke University bails on FOLIO; they are migrating to Alma.
  • The National Library of Australia will go live before September.
  • Orchid was released on May 30.
  • The Poppy release was postponed to November, when Quesnelia was going to be released. There will only be two releases this year. I'm guessing this will become the standard.
  • ReShare is imploding
    • They are fighting over Direct Consortial Borrowing. Marmot needs it, but they aren't going to build it. We are quitting the organization. We will have to create a new organization and build a new fork of ReShare if we want to use it.
  • Consortia module.
    • The Consortia module is going to be in Poppy. Now postponed to November.
    • The module will manage controlled vocabulary, like status and statistical codes. The central app will be able to assign these settings to tenants or groups of tenants. I suggested templates of code settings.
    • You can't add and remove members from the consortia yet.
    • INN-Reach will not be compatible with the Consortia module any time soon.
  • Data Import
    • EBSCO has realized that Data Import has serious deficiencies. It's not clear how they plan to resolve them. They have replaced the Product Owner, and stopped the meetings for now. They are adding a second development team to work on it.
    • They are talking about setting up a system of chunking data, so that a large file would be broken up into chunks that the system could handle. They could also intersperse chunks from different jobs, so that they could appear to load simultaneously.
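The chunking idea they described can be illustrated with a toy sketch. This is only an illustration of the concept, not EBSCO's design: chunk size and the round-robin interleaving policy here are made up.

```python
# Illustrative sketch: split large record batches into fixed-size chunks,
# then interleave chunks from different jobs so the jobs appear to load
# simultaneously instead of one large file blocking the queue.
from itertools import zip_longest

def chunk(records, size):
    """Break a list of records into chunks the system can handle."""
    return [records[i:i + size] for i in range(0, len(records), size)]

def interleave(jobs):
    """Round-robin chunks from several jobs into one processing queue."""
    queue = []
    for group in zip_longest(*jobs):
        queue.extend(c for c in group if c is not None)
    return queue

job_a = chunk(list(range(10)), 4)        # three chunks: 4 + 4 + 2 records
job_b = chunk(list(range(100, 106)), 4)  # two chunks: 4 + 2 records
queue = interleave([job_a, job_b])
# Chunks alternate A, B, A, B, A rather than all of job A running first.
```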
  • New app called Lists
    • EBSCO is creating an app called Lists that "aims to provide actionable lists in FOLIO at the point of need." I'm not sure what this means, but it sounds like they are copying Create Lists from Sierra.
  • Browser
    • FOLIO is entirely web based. It has to run in a browser.
    • Chrome is the "reference browser" which means that's what they test on. It is supposed to work on other browsers, but they don't test it on other browsers, and they don't necessarily fix bugs unless they occur in Chrome too.
  • Metadata
    • They are developing bib record templates.
      • There will be MARC templates for bibs, authorities and holdings, not items.
    • Consortia display.
      • You will see one accordion with your local items and another one with items from other members.
  • Acquisitions
    • Stanford is developing a vendor management app that will fetch records from GOBI, Harrassowitz, and Coutts. Stanford works with all the vendors, so eventually this will work with all vendors that can support it.

Ongoing Action Items

Action

Responsible parties

Implement fix for local art in 856 fields in all load profiles.

Lloyd

Investigate Tableau tool for finding bad dedupes

Lloyd/Brandon

Create a new itype for a dummy item that will only allow local holds for the library that creates the record, and change load profile (J) to use the new dummy item.

Brandon/Lloyd

Fix load profile (J) to use new dummy item.

Lloyd

Fix the documentation for load profile (J)

Lloyd/Tammy

Create a flowchart to describe when to use which order record loader.

Lloyd/Tammy

Document ways to find music with no language code in List 21, the language problem list.

Lloyd

Develop cataloging training materials

Tammy/Lloyd

Develop flow chart for how to use the volume field

Lloyd

Investigate a new Tableau utility for finding bad volume field use

Lloyd/Brandon

Develop documentation for Marquis macro

Lloyd/Tammy

Next Duplicates Sub-committee meeting: 7/26

Next UCC meeting: 7/26

 