Projects, Process efficiency | Oleg Braginsky, Maksim Golub
Scattered real estate data is worth nothing by itself. But when you extract it, process it, and combine it, there is a chance to find real gems. With student Maksim Golub and founder of the School of Troubleshooters Oleg Braginsky, we will do a deep dive into how integrating different data sources helps businesses gain valuable insights.
A PropTech company was looking for a new offering that would increase the engagement of its B2B customers, ramp up LTV metrics, and tap into a new revenue stream. There was already a solid core product generating value, as it had found its product-market fit, targeting an audience in design planning departments.
After customer development and hypothesis testing, which is a separate story, an insight was discovered: one of the challenges for real estate folks is deciding what to do next – whether to build a new project or to optimise the existing portfolio to hit their targets: revenue, metrics, happy employees.
The missing thing was the data. It was not completely off the radar. Many sources and systems were already there: room booking, motion sensors, access cards, network traffic, calendars, messengers, temperature sensors. The main question was what to get, how to get it, and what to do with it afterwards to extract the benefits.
Instead of listing all the possible sources, we investigated the customer journey – what customers interact with. The team was trying to wrap their heads around successful scenarios: actionable recommendations, insights, saved space, more money made or preserved, happier people, the biggest gaps discovered.
Second iteration – figure out what we might have missed and whether it had potential: Wi-Fi spot logs, Google Calendar, Zoom meetings, Slack metadata. A few of these options were pursued while running effort/value analysis. But it led to discovering another source of data – people. A separate story about that matter too.
The implementation was planned in a few stages to move gradually. The first stage was a Proof of Concept: make sure we can consume data and then simply display it on a dashboard. It was tempting to start with the API integration, but being able to process a simple CSV was the proper way to go about it.
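A minimal sketch of what such a PoC ingester could look like, assuming a hypothetical CSV with room_id, start, and end columns (the real schema differed per source):

```python
import csv
from datetime import datetime

# Minimal PoC loader in the spirit described above. The file layout and the
# room_id/start/end columns are assumptions, not the company's real schema.
def load_events(path: str) -> list[dict]:
    events = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            events.append({
                "room_id": row["room_id"],
                "start": datetime.fromisoformat(row["start"]),
                # "end" may be absent; see the closing-the-loop rule below
                "end": datetime.fromisoformat(row["end"]) if row.get("end") else None,
            })
    return events
```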
Instead of looking at and copying what others had, we decided to focus on what we wanted to achieve and how to make the data work for us, not against us. After a few iteration cycles, a draft was introduced and agreed across the product squad. We acknowledged that it would not be set in stone forever and would be subject to change.
When designing it, we realised that some events may not come with an “End” date, so there should be a mechanic to close the loop. Concisely: if there is no single record of when an event ended, an extra 15 minutes is added to close it. The very original data would be persisted to find gaps if things went wrong.
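Here is that closing-the-loop rule as a sketch, assuming the last known timestamp is the start; the auto_closed flag is illustrative:

```python
from datetime import timedelta

# If an event carries no end timestamp, close it 15 minutes after its last
# known point (assumed here to be the start), and mark it as auto-closed.
CLOSE_AFTER = timedelta(minutes=15)

def close_open_events(events: list[dict]) -> list[dict]:
    closed = []
    for e in events:
        if e["end"] is None:
            e = {**e, "end": e["start"] + CLOSE_AFTER, "auto_closed": True}
        closed.append(e)
    return closed
```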
As for the exchange with customers, we opted for SFTP. Each company had a folder to upload its data. To ensure nothing would be missed, processed and failed files were saved separately. Some clients preferred to just share access to their own file storage, so we had to adapt to that as well.
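The per-company routing could look roughly like this; the processed/failed folder names are assumptions:

```python
import shutil
from pathlib import Path

# Move an uploaded file into <root>/<company>/processed or .../failed so that
# nothing is lost and every outcome stays visible on disk.
def route_file(root: Path, company: str, file: Path, ok: bool) -> Path:
    target = root / company / ("processed" if ok else "failed")
    target.mkdir(parents=True, exist_ok=True)
    return Path(shutil.move(str(file), str(target / file.name)))
```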
As data volumes increased, we introduced API integrations and webhooks to support real-time communication. This decreased the need for manual uploads and enabled clients to push data directly and receive instant status updates. The shift reduced turnaround times and improved overall process efficiency.
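A hedged sketch of such a webhook receiver, assuming Flask; the endpoint path, payload fields, and status codes are illustrative, not the actual API:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Accept pushed events and answer with an instant status, as described above.
@app.post("/webhooks/occupancy")
def occupancy_webhook():
    event = request.get_json(force=True)
    if not event.get("room_id") or not event.get("start"):
        return jsonify({"status": "rejected", "reason": "missing fields"}), 400
    # In a real setup the event would be queued for processing here.
    return jsonify({"status": "accepted"}), 202
```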
Querying and processing the information were simple on paper but quite challenging in practice. One of the issues was false positives: there could be noise points, e.g., someone would show up in a room for a few minutes and then leave. Setting up a framework to filter this helped with such annoying misinterpretations.
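The filter itself can be as simple as a dwell-time threshold; the 10-minute cutoff below is an assumed value, not the team's actual setting:

```python
from datetime import timedelta

# Drop visits shorter than a threshold so a two-minute walk-through
# does not count as occupancy.
MIN_DWELL = timedelta(minutes=10)

def drop_noise(events: list[dict]) -> list[dict]:
    return [e for e in events if e["end"] - e["start"] >= MIN_DWELL]
```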
Then there were dynamic changes to consider, e.g., how to recognise and treat an event if the meeting was created but then changed or even removed. On top of that, every system would send this differently. Key takeaway: having the raw data and understanding the specifics of each source is never a bad idea.
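One way to tame the create/update/cancel zoo is to replay the raw messages and keep the latest state per event id; the message shape below is an assumption:

```python
# Replay raw messages in order, keep the latest state per event id, and mark
# cancellations instead of deleting them, so the raw history stays auditable.
def reconcile(messages: list[dict]) -> dict[str, dict]:
    state: dict[str, dict] = {}
    for msg in messages:
        eid = msg["event_id"]
        if msg["type"] == "cancelled":
            if eid in state:
                state[eid]["cancelled"] = True
        else:  # "created" or "updated"
            state[eid] = {**state.get(eid, {}), **msg["payload"], "cancelled": False}
    return state
```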
Surprises sometimes had a positive connotation too. It was discovered that some platforms come with their own insights systems. E.g., the Signs of Life feature allowed distinguishing between people and items, activity and inactivity. With that it was easier to say that a guy with a big backpack on a chair is still a single person.
Once the data is in place, it is time for mapping. What you need is the list of rooms in the right shape, with correct properties. The reality is that all medium and enormous-size companies keep it their own way. Close and tight collaboration during the onboarding stage helps to clarify what is what.
The raw data will not be helpful if we do not know where exactly events occurred. It would be easy if there were just one place with multiple rooms. It turned out that the structure of the space inventory had a hierarchy. The second stage of the mapping process was to figure out where each unit belongs in the physical world.
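Conceptually, every unit resolves to a path through that hierarchy; the portfolio -> building -> floor -> room levels below are an assumption, since real inventories varied per customer:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Unit:
    unit_id: str
    name: str
    parent_id: str | None  # None for the portfolio root

def resolve_path(units: dict[str, Unit], unit_id: str) -> list[str]:
    """Walk up the hierarchy to locate a unit in the physical world."""
    path = []
    current = units.get(unit_id)
    while current is not None:
        path.append(current.name)
        current = units.get(current.parent_id) if current.parent_id else None
    return list(reversed(path))
```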
A lot of blood, sweat and tears here: unmapped rooms, back mapping, wrong connections, same or similar naming, a mix of internal systems, changing schemas, getting signals when a new room was added. Sometimes even customers did not know what was really going on, as there were multiple teams involved.
To handle this, we built fallback logic, maintained internal mapping tables, and introduced sanity checks to catch mismatches early. We also set up alerting for unknown entries, helping us stay ahead of silent failures. Collaboration with customer tech teams became key, requiring direct Slack comms and weekly syncs.
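The simplest such sanity check just compares the ids seen in events against the mapping table; the logging channel is a placeholder:

```python
import logging

logger = logging.getLogger("mapping")

# Flag ids that arrive in events but are absent from the mapping table,
# so silent failures surface early instead of skewing the numbers.
def check_unknown_ids(event_ids: set[str], mapped_ids: set[str]) -> set[str]:
    unknown = event_ids - mapped_ids
    for uid in sorted(unknown):
        logger.warning("unmapped id seen in events: %s", uid)
    return unknown
```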
The mapping tool evolved rapidly to keep up with growing complexity. It supported flexible CSV-based configurations and could combine data from multiple source types, including SFTP, API, and direct integrations. This allowed us to standardise inputs, reduce custom logic, and onboard new clients faster.
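The configuration itself could be as plain as a CSV tying external ids from each source type to internal room ids; the columns below are an assumed shape, not the tool's real format:

```python
import csv
import io

# Each row maps an external id from some source type (sftp, api, ...) to an
# internal room id, so all inputs standardise to one vocabulary.
CONFIG = """source_type,external_id,internal_room_id
sftp,ACME-R-101,room_101
api,zoom:987654,room_101
"""

def load_mapping(text: str) -> dict[tuple[str, str], str]:
    return {
        (r["source_type"], r["external_id"]): r["internal_room_id"]
        for r in csv.DictReader(io.StringIO(text))
    }
```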
As the team scaled and moved fast, we delegated operational tasks to Ops. We ran weekly enablement sessions to walk them through tools, edge cases, and workflows. This not only improved handover efficiency but also helped them confidently support all customer demos and day-to-day troubleshooting.
Working with it allowed us to develop a dedicated framework of always double-checking the data, keeping logs, and ensuring traceability. Each partner would introduce a challenge, e.g., how would you recognise and group multiple sensors across multiple rooms if they have nothing but ids? Welp, it was solved ;)
Another thing was how to get the queries right. For example, you want to get the occupancy percentage week after week. Ignore the definition for now; what matters is the set of rules: working hours, shifts, reducing the weight of weekends, considering public holidays, emergencies of various kinds.
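A sketch of such a weekly calculation under those rules; the working hours, weekend weight, and holiday list are assumed parameters to be set per customer:

```python
from datetime import date, datetime, time

WORK_START, WORK_END = time(9, 0), time(18, 0)  # assumed working hours
WEEKEND_WEIGHT = 0.25                           # assumed weekend weight
HOLIDAYS: set[date] = set()                     # public holidays per customer

def day_weight(d: date) -> float:
    if d in HOLIDAYS:
        return 0.0
    return WEEKEND_WEIGHT if d.weekday() >= 5 else 1.0

def weekly_occupancy(busy_hours: dict[date, float]) -> float:
    """busy_hours maps each date of the week to occupied hours within working hours."""
    work_day = (
        datetime.combine(date.min, WORK_END) - datetime.combine(date.min, WORK_START)
    ).total_seconds() / 3600
    capacity = sum(work_day * day_weight(d) for d in busy_hours)
    used = sum(h * day_weight(d) for d, h in busy_hours.items())
    return used / capacity if capacity else 0.0
```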
We tackled this through rigorous testing – first internally, then by gradually involving QA as scenarios stabilised. Formulas were double-checked against expected outcomes and edge cases. In parallel, we researched cases that would skew the data: unusual work patterns, partial outages, or gaps in sensor input.
Then terminology. The definition of occupancy varies across companies. Better to stick to mathematical values and extrapolate from there. For example: averages do a poor job, medians help with extremes, and percentiles have so far been the most efficient. All metrics had tooltips to make them easier for users.
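A quick illustration of why percentiles beat averages on skewed utilisation data; the sample values are made up:

```python
from statistics import mean, median, quantiles

daily_utilisation = [0.05, 0.10, 0.12, 0.15, 0.80, 0.85, 0.90]

print(round(mean(daily_utilisation), 2))    # pulled up by a few busy days
print(round(median(daily_utilisation), 2))  # robust to the extremes
p90 = quantiles(daily_utilisation, n=100)[89]
print(round(p90, 2))                        # 90th percentile: peak demand
```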
To help ourselves during the MVP and the first stage, we put together an admin panel. I would compare it with any back office from real life: a café, restaurant, or hotel. It is something that was built fast, works well, and gets the job done, statistics aside. It allowed us to see logs and to clean, wipe, or just fix spoiled records.
The last but most important part was serving the output to users. Thinking about what customers should see, we looked at it from several angles: tactical and strategic points of view, the ability to interpret the findings – via numbers or through visual highlights – and switching between the layers smoothly.
For the strategic view there was a dashboard showing what is going on across the portfolio, with the ability to dive deeper. Expecting that customers may want more answers sooner, on top of the native UI we added embedded widgets from BI systems, so that folks from the data team could help ship them fast if needed.
2D models of floors got an update too. Sure, it would be interesting to explore 3D over the next iterations, but it would make things fancy without adding a lot of value. Hence, utilization was simply shown as color-coded sectors with an option to adjust filters by date, sensors, or type of measurement.
Then there was a table view grouping units by their types, e.g., meeting rooms of various capacities, pods, or even groups of individual workstations. This is where we thoroughly checked whether the calculations worked correctly, as the data could be skewed on a bigger scale if put together wrongly at the very beginning.
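The classic pitfall at this scale is averaging per-room percentages instead of summing hours first; a sketch of the safer aggregation, with illustrative field names:

```python
from collections import defaultdict

# Aggregate busy and capacity hours per unit type, then divide once, so big
# rooms and small rooms are weighted correctly instead of averaged naively.
def occupancy_by_type(rooms: list[dict]) -> dict[str, float]:
    busy = defaultdict(float)
    capacity = defaultdict(float)
    for r in rooms:
        busy[r["type"]] += r["busy_hours"]
        capacity[r["type"]] += r["capacity_hours"]
    return {t: busy[t] / capacity[t] for t in busy if capacity[t]}
```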
Diverse types of data opened new ways of interpretation. When you see how the rooms were booked, who exactly was in there, which employee used their card to enter the building, which department they belong to, and even know their names – this allowed discovering insights that would otherwise stay unnoticed.
Regularly exchanging updates with teams keeps everyone aligned, informed, and working toward the same goals. It creates opportunities for others to suggest improvements, flag issues early, or share relevant experience. This kind of open communication strengthens collaboration and drives efficiency across the board.
Building partnerships with vendors strengthens the business by aligning efforts around shared goals and customer needs. Participating in joint presentations or panel discussions on industry-relevant topics increases visibility, positions the brand as a thought leader, and creates mutual value.
Always keep in mind business and people. A product means nothing without strategic discovery tied to key stakeholders. In B2B, buy-in is a long-term, staged process. It’s not just what you build, but how you align, influence, and deliver. Discovery should de-risk decisions, validate needs, and build internal momentum.
Building the integrations is just a starting point. What you can do with them afterwards matters more. The information could help identify bigger trends based on hundreds of small signals. It helps build a valuable set of recommendations, saving hundreds of thousands of dollars, square feet, or human hours.