DegraPro: A Master Degree Product Design Case

Building a Bioinformatics Platform to Accelerate Proteomic Analysis with Emphasis on Degradomics

challenge.

Proteomic analysis, with an emphasis on degradomics, involves multiple post-processing steps that are time-consuming (8-18 hours, up to 1-3 days in complex scenarios) and demand a high level of technical expertise. The workflow was fragmented across multiple tools, scripts, and computational environments, leading to high time investment, low standardization, susceptibility to errors, poor reproducibility, and a technical barrier: proficiency in programming languages (Python, R) and bioinformatics tools.

how we solved.

DegraPro is a web-based bioinformatics platform that centralizes and automates post-processing steps through a user-friendly graphical interface. It allows direct input of peptide sequences and retrieval of structural and functional results, with both web-based visualization and export to structured formats. Key features include automated peptide sequence cleaning and formatting, integration with the UniProt biological database via API, web-based visualization of results, export to structured formats (CSV, Excel), user account management with process tracking, and fully web-based operation with no installation required.

results.

Reduced analysis time from 8-18 hours (manual process) to automated processing in minutes. Improved standardization and reproducibility, reduced technical barriers for researchers, and enabled researchers with theoretical knowledge but limited practical bioinformatics experience to conduct analyses independently.

activities.

User Research

Process Documentation

User Journey

Strategic Design

Wireframe

Usability tests

Visual Design

Information Architecture

Design Tokens

Scientific Methodology

Documentation

Design Science Research

context.
The University

This project was developed at the Federal University of São Paulo (UNIFESP), within the Postgraduate Program in Innovation and Technology (Master’s Degree). The program focuses on applied research, combining scientific rigor with the development of practical technological solutions to address real-world problems.

My Role

I worked as a researcher and master’s degree candidate, conducting applied research at the intersection of design, technology, and bioinformatics. My responsibilities included problem investigation, methodological definition, research planning, artifact design and development, evaluation of results, and scientific communication, following an academic research framework.

Duration

The project was conducted over a period of two years, from 2024 to 2026, encompassing the full master’s research cycle, from problem identification and literature review to solution development, evaluation, and final dissertation.

Rationale

Proteomics and degradomics play a critical role in understanding biological processes related to health and disease. By studying protein expression, modification, and degradation, researchers can gain insights into disease mechanisms, biomarker discovery, and drug development. Advancing tools and methodologies in this field contributes directly to faster scientific discoveries, improved diagnostics, and the development of more effective therapies, ultimately generating significant impact for society and public health.

methodology.
Design Science Research Methodology (DSRM)

The project adopted the Design Science Research Methodology (DSRM), a framework designed for research focused on building, evaluating, and communicating innovative technological artifacts that solve real-world problems.

  1. Problem Identification and Motivation: Understanding the fragmented manual process

  2. Solution Objectives Definition: Translating needs into clear, verifiable goals

  3. Design and Development: Building the DegraPro platform

  4. Demonstration: Making the platform publicly available

  5. Evaluation: Measuring usage, performance, and user satisfaction

  6. Communication: Disseminating results to the scientific community

This approach ensured scientific rigor while maintaining a practical focus, balancing a theoretical foundation with real-world problem-solving.

discovery.
Problem Understanding
and Requirements Gathering

The discovery process comprised the following steps:

  1. Collaborative Mapping: Worked directly with students and researchers from the Biotechnology Graduate Program at Universidade Federal de São Paulo

  2. Proteomic Process Documentation: Detailed step-by-step mapping of the manual analytical workflow

  3. Tool and Format Analysis: Examined real examples of input and output files used in daily routines

  4. Bottleneck Identification: Identified critical steps, repetitive activities, and high-dependency manual interventions


Challenge Identified:
Fragmented and Time-Consuming Workflows

The process of proteomic analysis, with an emphasis on degradomics, involves multiple post-processing steps, including peptide sequence cleaning and comparison with biological databases. When performed manually or semi-automatically, these activities are time-consuming (8-18 hours, up to 1-3 days in complex scenarios) and demand a high level of technical expertise. The workflow is fragmented across multiple tools, scripts, and computational environments, leading to:


  • High time investment: Manual data cleaning, formatting, and integration steps

  • Low standardization: Inconsistent processes across different researchers

  • Error susceptibility: Manual interventions increase the risk of mistakes

  • Poor reproducibility: Lack of integration between steps compromises traceability

  • Technical barriers: Requires proficiency in programming languages (Python, R) and bioinformatics tools

The Real-World Impact

Think of a research lab where each scientist uses different methods, tools, and formats to analyze the same type of data: results become inconsistent and difficult to reproduce or compare, and analysis takes far longer than it should. The same was happening in degradomics research, where valuable research time was being spent on repetitive, error-prone manual tasks instead of scientific interpretation.

proposed solution.
DegraPro Platform

DegraPro is a web-based bioinformatics platform that centralizes and automates post-processing steps through a user-friendly graphical interface. The platform allows direct input of peptide sequences and retrieval of structural and functional results, with both web-based visualization and export to structured formats widely used by the scientific community.

Key Features:
  • Automated peptide sequence cleaning and formatting

  • Integration with UniProt biological database via API

  • Web-based visualization of results

  • Export to structured formats (CSV, Excel)

  • User account management and process tracking

  • No installation required, fully web-based

The platform eliminates the need for manual database queries, automatically enriching peptide data with high-value biological information, including protein identification, functional annotations, domain structure information, and bibliographic references.
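
The cleaning and formatting step itself is internal to the platform, but the idea can be sketched in a few lines of Ruby. The specific rules below — stripping modification annotations, dropping flanking residues, and validating against the 20 standard amino acids — are illustrative assumptions, not DegraPro's actual implementation:

```ruby
require "set"

# Illustrative cleaning rules only; the platform's real rules are more involved.
AMINO_ACIDS = Set.new("ACDEFGHIKLMNPQRSTVWY".chars)

def clean_peptide(raw)
  seq = raw.strip.upcase
  seq = seq.gsub(/\([^)]*\)/, "")   # drop modification annotations, e.g. "(+57.02)"
  seq = seq.split(".")[1] || seq    # drop flanking residues, e.g. "K.PEPTIDER.A"
  # reject sequences containing anything outside the 20 standard amino acids
  return nil if seq.empty? || !seq.chars.all? { |c| AMINO_ACIDS.include?(c) }
  seq
end

clean_peptide("k.peptider.a")      # => "PEPTIDER"
clean_peptide("PEPT(+57.02)IDEK")  # => "PEPTIDEK"
```

Automating even a toy version of this step removes the copy-paste-and-regex cycle that dominated the manual workflow.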

The Opportunity

The scientific community needed a centralized platform that would:

  • Automate repetitive post-processing steps

  • Integrate seamlessly with biological databases (UniProt)

  • Provide an intuitive interface accessible to researchers with varying technical expertise

  • Ensure standardization and reproducibility

  • Accelerate the analytical process

  • Reduce operational barriers for the scientific community

design.

Information Architecture
and High-Fidelity Prototype

In this phase, the main goal was to create a tangible design we could show to users for feedback. Until this point we had been researching and defining; now, we started showing.

Information Architecture
and Key Design Decisions
  • Intuitive Navigation: Organized content and functionality for easy wayfinding

  • Simplified Interface: Focus on core functionality to reduce cognitive load

  • Progressive Disclosure: Show essential information first, details on demand

  • Clear Visual Hierarchy: Emphasize primary actions (submit sequences, view results)

  • Contextual Help: Provide guidance where needed without overwhelming users

High-Fidelity Prototyping

Created detailed prototypes for the key screens.

Usability Testing and Iteration

To verify whether the proposed solution met users' needs, we conducted a small usability test with the high-fidelity prototype.

User test activities
  • Prototype Testing: Conducted usability tests with target users (bioinformatics researchers)

  • Feedback Integration: Incorporated user feedback into design iterations

  • Accessibility Considerations: Ensured the platform is accessible across different devices and browsers

Key improvements
  • Enhanced onboarding flow based on user feedback

  • Improved result visualization for better data interpretation

  • Streamlined sequence input process

development.

Development, Integrations,
and Performance

In this phase, the goal was to turn the validated design into a working platform: implementing the architecture, integrating external services, and verifying performance under load.

Development and Technical Architecture

DegraPro was built using Ruby on Rails for rapid development through convention over configuration, maintainable code following DRY principles, and MVC architecture for clear separation of concerns. PostgreSQL was selected for its robustness, scalability for large data volumes, advanced concurrency control (MVCC), and transactional integrity essential for scientific data. The platform features MVC architecture with clear separation between data models, business logic, and presentation; RESTful API integration for seamless UniProt connectivity; WebSocket-based asynchronous processing for real-time updates on long-running tasks; relational database design ensuring data integrity; and responsive design supporting both desktop (71.4%) and mobile (28.6%) devices.
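
As a rough illustration of the process-tracking design, here is a plain-Ruby sketch of an analysis lifecycle. The class, state names, and transition rules are assumptions for the sketch; the real platform implements this with ActiveRecord models, background jobs, and ActionCable broadcasts:

```ruby
# Plain-Ruby sketch of an analysis lifecycle; names and states are illustrative.
class Analysis
  # Allowed transitions from each state.
  STATES = {
    pending:    [:processing],
    processing: [:completed, :failed],
    completed:  [],
    failed:     [:pending]   # a failed analysis can be re-queued
  }.freeze

  attr_reader :status, :history

  def initialize
    @status  = :pending
    @history = [:pending]
  end

  def transition_to(next_state)
    unless STATES.fetch(@status).include?(next_state)
      raise ArgumentError, "illegal transition #{@status} -> #{next_state}"
    end
    @status = next_state
    @history << next_state   # retained history supports tracking and reproducibility
    self
  end
end
```

Keeping the transition rules explicit is what lets the UI show trustworthy real-time status and lets a researcher audit exactly what happened to each submission.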

UniProt API Integration

The integration with UniProt's biological database enriches peptide analysis with comprehensive protein information, eliminating the need for manual database queries. The solution implements asynchronous search by submitting peptides to UniProt's async REST endpoint, job tracking to monitor processing status using job IDs, WebSocket connections for real-time status updates, comprehensive data enrichment retrieving protein function, expression, domain structure, and bibliographic references, and robust retry mechanisms for handling external service dependencies. The technical implementation uses POST requests to initiate searches at https://peptidesearch.uniprot.org/asyncrest, GET requests to check job status, and additional GET requests to retrieve detailed protein information from UniProt's REST API, automatically enriching peptide data with high-value biological information.
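
In outline, the flow looks like the sketch below. The `peps` form field and the job URL arriving via the `Location` header are assumptions about the endpoint's contract, and the polling loop is deliberately decoupled from HTTP so the logic can be exercised without the network:

```ruby
require "net/http"
require "uri"

SEARCH_URL = URI("https://peptidesearch.uniprot.org/asyncrest")

# Submit peptides for asynchronous matching; assumes the service answers
# with a job URL in the Location header (illustrative, not a verified contract).
def submit_peptides(peptides)
  response = Net::HTTP.post_form(SEARCH_URL, "peps" => peptides.join(","))
  response["Location"]
end

# Generic polling loop: `fetch_status` must return :running, :done, or :failed.
def poll_until_done(max_attempts: 30, delay: 2, &fetch_status)
  max_attempts.times do
    case fetch_status.call
    when :done   then return :done
    when :failed then return :failed
    end
    sleep delay
  end
  :timed_out
end
```

In the platform itself, the equivalent of `poll_until_done` runs server-side and pushes each status change to the browser over a WebSocket instead of blocking a request.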

Performance Testing and Optimization

Load testing with k6 involved progressive load testing reaching 100 concurrent virtual users, generating 31,968 HTTP requests at an average rate of 19.5 requests per second. Results demonstrated excellent performance with a median response time of 133ms and 90th percentile of 145.6ms, well below the 2-second acceptance threshold. The platform architecture remains stable under high load, with response times staying low and predictable for accepted requests. The main bottleneck identified was external service (UniProt) rate limiting rather than platform architecture issues, with a 39% failure rate attributed to API rate limits. The architecture is suitable for exploratory and initial screening analyses, with a future scalability path identified through local database mirroring for high-throughput scenarios.
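
Since the bottleneck was external rate limiting rather than the platform, the natural mitigation is retrying with exponential backoff. A minimal sketch of that mechanism follows; the `RateLimitedError` class and all parameters are illustrative, not the platform's actual code:

```ruby
# Hypothetical error a client would raise on an HTTP 429 response.
class RateLimitedError < StandardError; end

# Retry a rate-limited call with exponential backoff: 1s, 2s, 4s, 8s by default.
def with_backoff(max_retries: 4, base_delay: 1.0)
  attempt = 0
  begin
    yield
  rescue RateLimitedError
    attempt += 1
    raise if attempt > max_retries    # give up after the configured budget
    sleep base_delay * (2**(attempt - 1))
    retry
  end
end

calls = 0
with_backoff(base_delay: 0) do
  calls += 1
  raise RateLimitedError if calls < 3   # succeed on the third attempt
  :ok
end
```

Backoff smooths over transient limits; for sustained high-throughput use, the local database mirroring mentioned above remains the more scalable path.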

validation.

Testing DegraPro Live
With Real Users

In this phase, the goal was to put DegraPro in front of real users: releasing the platform publicly and measuring adoption, engagement, and satisfaction.

User Adoption and Engagement

From 87 anonymous users who viewed the landing page, 25 accessed the registration page (79 page views), resulting in 16 completed registrations (18.4% conversion rate) and 11 active users (68.75% activation rate). The 11 active users generated 37 total interactions, with a median of 1 interaction per registered user, including one early adopter who tested the platform intensively with hundreds to thousands of peptides. Usage patterns showed a strong focus on core functionality: sequence processing, result visualization, and analysis tracking. Page views in the authenticated area reflected this focus, with 220 views on the analysis details page, 177 views on the new analysis page, and 129 views on the user analyses list, while institutional/support pages received lower volume as users concentrated on primary features.
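
For clarity, the reported funnel rates can be recomputed directly from the raw counts:

```ruby
# Funnel figures from the evaluation, recomputed to show how the rates derive.
visitors      = 87   # anonymous landing-page visitors
registrations = 16   # completed registrations
active_users  = 11   # registered users who went on to use the platform

conversion = (registrations.to_f / visitors * 100).round(1)        # => 18.4
activation = (active_users.to_f / registrations * 100).round(2)    # => 68.75
```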

User Satisfaction

User satisfaction metrics demonstrated excellent initial experience, with 100% of users finding both the registration and login processes "easy" or "very easy", indicating low friction onboarding. Users successfully identified and used core functionalities, though some experienced initial difficulty finding features, highlighting an opportunity for improved onboarding. High engagement was observed with primary features including sequence processing and result visualization. Qualitative feedback from users emphasized key benefits: speed (faster than manual processes), better result organization, lightweight and easy-to-use interface, independence from scripts or manual database queries, and accessibility through web-based access requiring no installation.

How hard was it to use the platform?

results.
Measurable Results

DegraPro transformed proteomic analysis workflows, reducing time from hours to minutes while improving standardization and reproducibility.

Before (Manual Process):

❌ Export and initial formatting: 30 minutes to 1 hour

❌ Manual data cleaning: 2-6 hours

❌ Script import and setup: 30 minutes to 1 hour

❌ Script execution and adjustments: 2-4 hours

❌ Database integration: 1-3 hours

❌ Result consolidation: 1-2 hours

Total: 8-18 hours (1-3 days in complex scenarios)

After DegraPro:

✅ Automated processing through web interface

✅ Real-time status updates

✅ Immediate result availability

Total: up to 3 hours (vs. 8-18 hours manually).

Significant time reduction in post-processing steps

Scientific Impact
Reproducibility:

Standardized process reduces variability

Process tracking enables result reproducibility

Automated steps eliminate manual error sources

Accessibility:

Reduces technical barriers for researchers

Enables researchers with theoretical knowledge but limited practical bioinformatics experience

Supports diverse institutional contexts

Community Contribution:

Open-source code available (via GitHub, upon request)

Transparent methodology and evaluation

Supports scientific reproducibility and collaboration

conclusion.

Reducing the time spent in the process directly accelerates disease identification and the development of new drugs.

DegraPro represents more than a bioinformatics tool; it's a platform that transforms how researchers conduct proteomic analysis with emphasis on degradomics. By centralizing and automating traditionally fragmented, manual post-processing steps, DegraPro has:

Reduced analysis time from 8-18 hours to minutes

Lowered technical barriers for researchers with varying bioinformatics expertise

Improved standardization and reproducibility of analyses

Enhanced accessibility through a web-based, installation-free platform

Accelerated research by freeing researchers from repetitive tasks

The platform demonstrates the power of user-centered design, scientific methodology, and thoughtful technology choices in creating solutions that address real-world problems in scientific research.

As DegraPro continues to evolve, it serves as a foundation for future enhancements that will further accelerate degradomics research and enhance the scientific community's ability to understand proteolytic processes and their roles in biology and disease.

Key Takeaways

What This Case Demonstrates

  1. Problem-Driven Innovation: Deep understanding of user pain points led to a solution that addresses real needs in the scientific community.

  2. User-Centered Design: Direct collaboration with end users ensured the platform meets actual workflow requirements.

  3. Scientific Rigor: The application of the DSRM methodology provided structure while maintaining a practical focus.

  4. Technical Excellence: Careful technology selection and architecture decisions enabled efficient development and a scalable solution.

  5. Iterative Development: Prototyping, testing, and iteration improved the solution before full deployment.

  6. Performance Validation: Systematic performance testing validated technical decisions and identified future scalability paths.

  7. Impact Measurement: Comprehensive evaluation (usage metrics, user satisfaction, performance testing) provided evidence of success and areas for improvement.

  8. Community Contribution: Open-source approach and transparent methodology support scientific reproducibility and collaboration.

Skills Demonstrated

Product Design: User research, requirements gathering, information architecture, prototyping, usability testing

User Experience Design: User journey mapping, interface design, interaction design, accessibility

Technical Architecture: System design, API integration, database design, performance optimization

Scientific Methodology: Design Science Research Methodology, systematic evaluation, reproducible research

Cross-Functional Collaboration: Working with researchers, bioinformaticians, and scientific community

Performance Engineering: Load testing, performance analysis, scalability planning

Product Management: Feature prioritization, user feedback integration, roadmap planning

Scientific Communication: Documentation, methodology transparency, community engagement

thanks for reading.

I hope you enjoyed this case overview. While most of the project details are protected by a non-disclosure agreement (NDA), I'd be happy to discuss my role and contributions further in a live meeting. Check the contacts below and feel free to reach out.
